I’m not the dream police. Dream about whatever you want to dream about. But be skeptical when you see dreams being designed for you by somebody else.
Some important folks keep philosophizing about Artificial General Intelligence. You can get a good sense of the tone of that discussion from a few recent Twitter exchanges, where the head of OpenAI’s AGI efforts made the preposterous claim that “it may be that today's large neural networks are slightly conscious.” (Today, he wrote: “In the future, it will be obvious that the sole purpose of science was to build AGI.”)
That seems bold.
Artificial General Intelligence is, for some, the goal of AI development: a machine that can figure out how to do whatever you ask it to do. For some, this would require that the machine could think about what it is thinking about; for some of that cohort, this metacognition is inseparable from consciousness. But consciousness is not easily defined, and questions from “is a dog conscious?” to “is a rock conscious?” will find intelligent defenders on all sides.
My consciousness is not your consciousness. Today, questions of machine consciousness are about as relevant and answerable as asking what style of eggs we’d like to eat on the sun.
The only paths to AGI we see at the moment are unobtainable or unproven. Consciousness itself is knotty and loosely defined.
It’s a lot of energy committed to science fiction. There’s nothing wrong with science fiction, or with imagination being infused with technology — unless and until the imaginary distracts us from what we could be doing and what we could be building.
There are plenty of reasons that AGI is not the “end-state” of AI research (or, God help us, the entire history of science). For one, AGI is a monopoly: one system to rule them all. Far nicer for those being ruled would be isolated fiefdoms where human agency can still exert some control: why build a machine that can vacuum, do your taxes and control your oxygen levels when you could build three machines that do each thing just as well? (For some, that *is* AGI — building and linking these distinct devices, but surely that’s just the internet).
What’s more interesting to me is the matter of what these discussions do and what functions they serve.
The AGI Shortcut
To understand the function of the “AGI discourse,” I can draw on a few personal observations.
I’ve been in a few design workshops as either a facilitator or a participant. In these workshops we’re assembled to focus on a particular problem and sort out how we might develop tech to solve that problem. There’s a lot of critique of this model, and I’m dedicated to advancing those critiques in much of my work. I know the limits of this approach.
Better Speculative Design conversations tend to focus on stakeholders, most of whom are not in the room. A good session will have experts and representatives present to talk about needs, bringing their expertise to the designers who then use their expertise in design methods to shape something responsive to those needs. Sometimes, for the sake of diversity of thought, those assembled are not trained design professionals, but members of the public, or policy people, or even politicians, diplomats, local industry leaders, etc.
The larger group will break into small groups and flesh out ideas. Without fail, someone in one of the groups will start talking about Artificial General Intelligence and how that might be deployed to solve the problem.
I don’t fault these people. It’s an inevitable result of inviting untrained folks into design conversations oriented toward speculation or futures or the like. This is, in some ways, a byproduct of that diversity of thought — you want “outside the box” thinking, and sometimes folks are gonna talk about AGI or astral telepathy. They come to bring a perspective and they have one.
With AGI, something happens. Suddenly, the group is debating machine consciousness instead of the problem. The needs they’re trying to solve for are quickly set aside in favor of a fascinating conversation about consciousness and when robots will have it.
The imagination for proximate possibility (or the “adjacent possible”) is replaced by an imagination for distant possibility. There’s an understandable reluctance to discourage distant possibilities, because sometimes there are ideas there that can be brought back and made usable.
But you can also lure people into a conversation about hypotheticals that distract them from working. We’re meant to be cultivating techno-social imaginaries: that can be empowering, informative, and transformative. If someone finds that one afternoon in an AGI conversation at a workshop, great.
But what happens when that is always happening — like we see in so much of the discourse within the AI design space?
AGI as a Language of Non-Thought
Imaginative capacity has an attention span, and many of the matters that design attempts to solve — from climate change to false incarceration — are urgent. There is a limited supply of imaginative commitment in the public, and turning that imagination toward concrete actions demands a lot from it.
Distant possibility is safe, because the distance from our current world offers a distance from its politics. There are no stakes: the people are imaginary, and entrenched social challenges are dismissed as externalities: “assume we solve that.” When we move to proximate possibilities, though, we’re outlining steps, critiquing the politics of power in each step, and finding ourselves immersed in tensions that are difficult to resolve. We ask who decides and how those decisions get included. Each of these steps asks us who we must become to decide what technology must be.
And at any point in that discussion, AGI can solve the problems of politics. It says: “just wait for new technology.” It’s not intentional when it comes from random people fascinated by philosophical discussions about consciousness who show up for design conversations.
In this sense, discussions of AGI function as a thought-terminating cliché, a tactic shared by commercial advertising and totalitarian regimes. They’re discourse breakers, a go-to for when conversations are overwhelmed by cognitive dissonance. Hanging a “Workers of the World, Unite!” slogan on your grocery store window in the former Soviet Union wasn’t about showing your support for united workers. The state didn’t use it for that purpose, and neither does Snickers care if you believe that it’s really “what satisfies”.
Rather, both serve as ribbons to wrap around a conversation: a way of saying “in summary…” before a conversation gets too deep. Why are we Communists? So the workers of the world can unite. Why should I eat a Snickers? Because it satisfies.
Why are we doing all this AI stuff? Well, you could get into a messy conversation about what priorities we might set for our technologies, and then explore who decides and how they decide. Then you might ask all kinds of questions about why that isn’t happening.
Or you could say, “the end goal is to create an artificial general intelligence, one that can do smart things very efficiently, and solve many of our problems.” Then we have a shared goal, an outcome that justifies the decisions. Feel free, then, to talk about all the ways that these things might come to be, and all the things it might make possible, and all the things that might go wrong with that imaginary tech. Just don’t ask about what we’ve got out there right now. AGI Satisfies!
When artists or science fiction writers do it, it’s one thing. They’re tasked with dreaming. But when it’s coming from tech company CEOs and the academics, think tanks, and artists they support, it’s a much bigger issue.
Imagination and Power
When CEOs or folks with billion-dollar trusts start luring us into visions of distant possibilities — even scary ones, like world-ending robots — they create a vacuum around the proximate possibilities (and work) they don’t want to deal with.
That’s why I greet AGI / Consciousness conversations with deep suspicion. They usually start with these folks, then trickle down into the culture, reflecting a broader and often unstated political goal.
The same applies to many in the AI Ethics field. If one’s approach to AI ethics is to talk about technology that doesn’t yet exist, that’s not doing AI ethics. That’s pitching bad sci-fi pilots. Even sci-fi is meant to reflect current sociopolitical issues and questions.
It’s possible to ask too much of the imagination, and this can be a terrible constraint. I would never ask people to restrain what they want to think and talk about in the search for new ideas. Possibility is a border that always needs to expand. But imagination is intended to go beyond the termination of thought, beyond the cliché.
AGI discussions tend not to move outside of the discursive frame that produced them. So when artists work uncritically within the boundaries of that frame, they aren’t really challenging or extending imaginaries.
There’s a Buddhist aphorism: “After ecstasy, laundry.” The idea is that the enlightened being gets their a-ha ya-ya’s out, and then turns back to the work at hand. Maybe it’s less inspiring for designers and artists to adopt “After ideation, critique” as a maxim for working (many do). But if you want what you imagine to be useful, you have to find useful imaginaries.
So sure, let’s dream about landing on Mars, let’s dream about Rosey the Robot and KITT. But acknowledge that dreams of AGI are not our dreams. They come from somebody else, likely serve a purpose we may not be aware of, and any inspiration we draw from them needs to be grappled against a world with bodies.
Anyway, if you dig this, please follow me on Twitter and consider subscribing to this small corner of the Internet or sharing it with those who might also dig it. Thanks!