Do you ever feel like, in every conversation you have about AI, there is a set of assumptions that steers the conversation into unhelpful directions — even among people who *know* these shorthands are problematic?
My latest article for Tech Policy Press is an effort to name them — to help us recognize our reliance on those myths and to call them out in our own thinking.
I’m grateful to Daniel Stone for chatting with me about his work on the frames of AI for this piece!
Click below to get the whole piece, or keep reading for the intro.
Aside from the physical heft of data centers seen from highways and the fiber optic cables crawling into homes and offices, the digital world mostly exists in our imagination. That imagination is shaped by the people selling services that rely on that infrastructure. This has given rise to a mythology of technology that aims for simple, graspable explanations at the expense of accuracy.
Companies marketing technology products and services wield enormous influence over our understanding of technology. Marketing can lean into complexity to obscure the challenging aspects of these products, or smooth out rough edges through oversimplification, and the designers of marketing materials play a key role in either case. It isn’t always done with ill intent. Marketing, as a profession, relies on myths to help us understand these technologies, and those myths animate how designers imagine these systems.
A competing set of interests is at play: new technologies need simple metaphors to thrive, but that simplicity comes at the expense of accuracy. Meanwhile, corporate boardrooms and founders believe in (or at least invest in) compelling myths and reward communications specialists for reinforcing those myths among consumers.
Given their origins, these myths inevitably skew to the techno side of techno-social equilibrium. They pollinate the social imagination with metaphors that lead to conclusions, and those conclusions shape a collective understanding. But if we want a socially oriented future for technology, we need myths that animate the social imagination of technology rather than overwrite it.
Why do these myths matter? Daniel Stone, Director of Diffusion.Au and a researcher at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, examines the frames we use when discussing AI and how those frames shape our responses.
“Myths and metaphors aren’t just rhetorical flourishes; they are about power. They tell us who has it, who should have it, how they should use it, and in the service of what goal,” he told me in an interview over email. “By actively choosing myths and metaphors that perpetuate a healthy, collaborative, ethical, and accountable understanding of AI, we can help ordinary people better understand how this technology works and how it should be used.”
Before creating beneficial myths to direct our use of technologies, we should first understand the current myths surrounding artificial intelligence. Myths don’t have to be cynically deployed, though they aren’t always innocent. Many seem obvious, but they continue to shape our thinking around AI. They infiltrate that thinking through convenience, as reliable shorthand, and as commonly understood references.
I’ve sorted a handful of these myths into rough categories; there are certainly others. The goal is to provide a way of looking at technology that scratches against the spectacle of metaphor, so we can think more clearly about what AI is and does.
While I’m Here…
I’ll be speaking at the Gray Area Festival in San Francisco on September 12-15 alongside Lynn Hershman Leeson, Rashaad Newsome, Casey Reas, Trevor Paglen, Lauren Lee McCarthy, Ranu Mukherjee, Morehshin Allahyari, and Victoria Ivanova!
You can find more info and buy tickets through Gray Area, which has a wildly generous collection of past talks.
Many years ago (over 35), I worked in a fully "computerized" plant, where operators interfaced with the system through UMMIs (CRT + keyboard) to operate the different functions of the plant. The computer ran a UNIX system and had 64K of memory. Yes, 64K! To keep the folks running the plant focused on their part of the task, I put a sign over the computer room that read "This machine has no brain, use your own." I think that we are falling into the same fallacy with AI. AI is a TOOL, not a brain, and it still relies on data input to generate responses. As my FORTRAN teacher used to say (giving away my age here), GIGO, and I'm sure you know what that means. So do not give up the autonomy of your decisions to a machine; use it to give you the data to make your decisions, not to replace them.