I am reading Atlas of AI by Kate Crawford, partly because of an interest in all things digital (and the AI ‘thing’ is hot and hotter) and partly because I am a member of the NSW Government’s AI Advisory Committee.
The book is a powerful thesis from a significant thinker and practitioner, suggesting a frame for thinking about AI that is profound, intelligent and disturbing.
I’ve just finished the introduction to the book, which stands as an essay in its own right, staking out the territory she intends to explore. I shall write more as I read more and add it to the homework I’m endeavouring to get through on a topic that strikes me as one of the top three or four items on any realistic list of big policy dilemmas confronting the world right now.
My quick framing?
This is a debate whose pace and intensity of evolution, driven by a relatively narrow and extremely powerful collection of organisations and interests, are easily outpacing our collective capacity to think clearly and design sensibly for its power and potential (good and bad).
Crawford’s thesis is that AI is a venture with imperial aspirations whose sometimes unsavoury instincts for capture, control and compliance (a kind of classic colonial adventure, in that sense) too often escape proper analysis and, as a consequence, proper and requisite constraint.
We already know AI can do untold good and is certainly already doing untold harm. As ever with technology, we’re locked in the perennial contest between invention, innovation, money, power, politics and, somewhere in the mix, ideas of the common good and the public interest which, in this current round, can feel frail and fragile.
I’m expecting the book to be one of the most significant and influential I have read (more on that topic in coming pieces), but I guess I’d better reserve my judgement until I’ve got beyond the introduction.
For the moment, here are some of Crawford’s touch points as she sketches the thesis that will unfold over the ensuing chapters.
“We can see two distinct mythologies at work. The first myth is that nonhuman systems…are analogues for human minds. This perspective assumes that with sufficient training, or enough resources, humanlike intelligence can be created from scratch, without addressing the fundamental ways in which humans are embodied, relational and set within wider ecologies.”
She goes on…
“The second myth is that intelligence is something that exists independently, as though it were natural and distinct from social, cultural, historical, and political forces. In fact, the concept of intelligence has done inordinate harm over centuries and has been used to justify relations of domination from slavery to eugenics.”
We’re only on page 5. We should be worried.
She quotes philosophy professor Hubert Dreyfus, who raised the possibility that the brain might process information in an entirely different way than a computer. Human intelligence and expertise rely heavily on “many unconscious and subconscious processes, while computers require all processes and data to be explicit and formalized.”
That, she argues, reflects “the ideology of Cartesian dualism in artificial intelligence: where AI is narrowly understood as disembodied intelligence, removed from any relation to the material world.”
This strand of the discussion reminds me of the analysis from Erik Larson (The Myth of Artificial Intelligence, which I’ve just finished; more homework), whose equally powerful assertion of the role of inference and “abduction” in what makes us truly intelligent resonates with the same anxiety.
Two trenchant passages set the thesis:
“AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories and classifications. AI systems are not autonomous, rational or able to discern anything without extensive, computationally intensive training with large data sets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures. And due to the capital required to build AI at scale and the ways of seeing that it optimizes, AI systems are ultimately designed to serve existing dominant interests. In this sense, artificial intelligence is a registry of power.” (p8)
And then this:
“To understand how AI is fundamentally political, we need to go beyond neural nets and statistical pattern recognition to instead ask what is being optimized, and for whom, and who gets to decide. Then we can trace the implications of those choices.” (p9)
The book is called Atlas of AI and in this first chapter, Crawford explains why the idea of an atlas is so apt.
“The field of AI is explicitly attempting to capture the planet in a computationally legible form. This is not a metaphor so much as the industry’s direct ambition. The AI industry is making and normalising its own proprietary maps, as a centralized God’s eye view of human movement, communication and labor.”
And a little later:
“This is a desire not to create an atlas of the world but to be the atlas – the dominant way of seeing. This colonizing impulse centralizes power in the AI field: it determines how the world is measured and defined while simultaneously denying that this is an inherently political activity.” (p11)
This is AI as an imperial venture, constantly threatening to break free from the “practice of justice” and “the enforcement of limits to power” (quoting Ursula Franklin).
The question is what to do.
I’m expecting Crawford to have some ideas, although there are times when I wonder whether, in this round of the technology-politics contest, the sheer and unprecedentedly concentrated power of the “tech” side is simply overwhelming and, in all sorts of worrying ways, irresistible.
But this is her analysis right from the start:
“The expanding reach of AI systems may seem inevitable, but this is contestable and incomplete. The underlying visions of the AI field do not come into being autonomously but instead have been constructed from a particular set of beliefs and perspectives. The chief designers of the contemporary atlas of AI are a small and homogenous group of people, based in a handful of cities, working in an industry that is currently the wealthiest in the world.”
And in case the point gets missed … “the maps made by the AI industry are political interventions, as opposed to neutral reflections of the world.” (p13)
Getting more practical, this opening chapter asks “what kinds of politics are contained in the way these systems map and interpret the world? What are the social and material consequences of including AI and related algorithmic systems in the decision-making systems of social institutions like education and health care, finance, government operations, workplace interactions and hiring, communications and the justice system?”
The rest of the chapter foreshadows the topics for the remainder of the book: AI as an extractive industry with dire consequences for the environment and too often exploitative of human labor; the transfer of data from “mine” or “yours” to simply “infrastructure”; facial recognition and the dangerous allure of emotion detection; and, finally, AI as a “structure of power.”
A final couple of observations from the end of the beginning:
“Artificial intelligence, then, is an idea, an infrastructure, an industry, a form of exercising power, and a way of seeing; it’s also a manifestation of highly organized capital backed by vast systems of extraction and logistics, with supply chains that wrap around the entire planet. All of these things are part of what artificial intelligence is – a two-word phrase onto which is mapped a complex set of expectations, ideologies, desires and fears.”
Finally:
“Simply put, artificial intelligence is now a player in the shaping of knowledge, communication, and power. These configurations are occurring at the level of epistemology, principles of justice, social organisation, political expression, culture, understandings of human bodies, subjectivities, and identities: what we are and what we can be…artificial intelligence, in the process of remapping and intervening in the world, is politics by other means…dominated by the Great Houses of AI, which consist of the half-dozen or so companies that dominate large-scale planetary computation.”
And, in the end, this is why this matters:
“AI can seem like a spectral force – as disembodied computation – but these systems are anything but abstract. They are physical infrastructures that are reshaping the Earth, while simultaneously shifting how the world is seen and understood.” (p19)
Read on…