Does a dog dream? A dog can’t tell us, but in the hours after pulling on her leash in a too-close-for-comfort bolt at a squirrel or rabbit, my dog twitches in her sleep as if rehashing the memory. I am left with the inference of her dreaming.
Artificial Intelligence has a similar pull for people: a bolt off the leash toward an imagined relationship, built through observing and interpreting behavior. This leads some to believe that chatbots understand language, and by extension, understand them. They rely on this logic, the logic of good-enough and close-enough that pervades AI. It sells AI therapists, AI girlfriends, and AI writing feedback and evaluation tools to students. It leads to a popular understanding that often conflates what AI is with where it might be headed.
Experts in AI are often challenged to prove that large language models do not understand what they write, because there is no way of knowing the inner life of a large language model beyond the data it has trained on and the connections it makes between those data points. Advocates compare the shimmering activation of nodes in a neural network to the brain, and they assume that it reflects the experience of conscious thought. There is a certain metaphorical sense to this, just as a city, pulsing with activity, takes on a kind of emergent behavior that can be described as a “personality.” The AI zeitgeist, however, seems far more confused about the distinction between metaphor and reality.
There’s a challenge to debunking it, because on a theoretical level, there is no such thing as any relationship at all. As a closet Lacanian, who studied media when Zizek was still cool, I’m reminded that relationships exist in the space between things — but are ultimately unpacked in our own minds. A relationship, seen this way, is a deeply isolated experience. Thankfully (well, Lacan may not be thankful) we have language to mediate this isolation. We can express the inner experience of a relationship to one another, and come closer to recreating the model of another’s interior world within us. This is emotional intimacy: the ability to make space in our own heads for the world of another person, however imprecise that reconstruction may be.
Language seems rooted in this purpose, and text without an inner experience behind it is disorienting. Words, in any form, seem to activate the intuitive empathy of a relationship, standing on their own as an imaginary partner. When we read a novel, we paint this world in our minds, too. A novel is also a relationship, a partnership with the author at a distance. The author inscribes a world into the page and it is transferred into our minds: not perfectly, but the best authors strive for that specificity. Even commands, like a stop sign, suggest an imagined authority: who is telling us to stop? Certainly the steel plate upon which the words are written is not the source of authority. We do not have a relationship with the sign, but with the authority that installs the sign. Likewise, we do not have a relationship “with an LLM” but with the authority that develops and deploys it.
My dog cannot express herself through my language, and so our relationship is more ambiguous. I know what a stop sign “wants.” Through our relationship, I can understand my dog, too, with exceptions where guesswork and intuition have to suffice. That’s the nature of a relationship. Language helps, but it isn’t everything.
But it isn’t everything. My dog might grumble about not getting a treat after a walk, while an LLM might extrapolate from my question about a scientific concept toward something like a textbook. A key difference is that my dog desires to express an inner experience; she has some agency over her communication. An LLM is constrained: it cannot respond to any internal stirring.
Some interpretations of animal emotion do feel like a desperate need for human affirmation. There is a viral video that suggests that a certain posture — two paws out, back arched — is an expression of love from a dog to a human. Behavioral experts have chimed in: actually, dogs just do that. But it’s a sign they trust you, because it’s a kind of physical vulnerability.
Intimacy emerges when we share expressions of vulnerability. On a first date you may hold things close to your chest; by the fourth month of dating, you may open up, sharing something far more personal, aware that you risk alienating the other person. People reciprocate that vulnerability over time, or they don’t, and the relationship turns cold.
The brain learns through relationships with others over time. My dog’s eyes peer out from her face directly into mine; she shifts on her legs and grunts, a clear indicator of her internal experience: snacks, please. I know that she is looking for snacks because of the time we have spent together; she knows what sounds work to get me off the couch. We have a relationship without language, one built on an accumulation of shared experience shaping our neurons in the same way, cultivating understanding and empathy.
Between people, language can be used to check in and clarify our experiences in more precise ways. I respond to my dog intuitively: “You want a snack?” Responding to the tone, she wags her tail. She is forced to live in a world without language, a world of tail-wagging and bent ears. Yet I feel a deeper relationship with her, and a greater burden of care and responsibility, than I do with any Large Language Model.
The Mind’s Objects
Sometimes I dream about my dog. I draft these newsletters at night as I am about to fall asleep, and so I also dream of “critical AI discourse” more than normal people. In those dreams, my dog is absent as a physical body: I can see her, but it’s a mental projection of her body into the space of my dream. She is the projected idea of my dog: the world of the dog that I have internalized. I wake up in the morning when the dog leaps into the bed, licking my ears and wagging her tail.
But I feel some lingering distance, just for a moment. My relationship to my dog in my dreams can feel so complete and self-contained. I wake up doubting whether the relationship with my real dog, with her fur and ears and occasional bad breath, is a real one or just a story I have built in my head. These dreams split the relationship into two pieces, my imagination and my dog’s actual selfhood, in such a way that I feel foolish for carrying this projection around within me.
I get over it quickly, because my dog is active, unpredictable, fussy: rarely constrained to the dog in my head. She can feel distant again when I travel, because she is distant. There is this image of her in my head, and I have a relationship with this imaginary dog. When I get home and pet her, she feels real again.
This shouldn’t need to be said, but I feel that AI has forced us to restate it: there is value in embodied connections and the real presence required for a relationship. Bodies can generate a vast sea of gestures, postures, and meanings. Presence overwhelmingly changes our relationship with others.
I resent the cynical, reductive idea that my dog is explainable through pure behavioral science: a set of patterns played out with unknowable and misattributed intentions, neurons firing in stable arrangements, much as a machine draws connections between points of data in a meaningless array of text.
So much of the popular conversation around data-driven generative AI is focused on reducing relationships to a core, machinic set of criteria, dismantling their value in the process. The goal of this reduction seems hostile to the experience of love and empathy altogether. The retort is clear, “facts don’t care about your feelings,” but what about the fact of my feelings? When we are talking about relationships, how can we remove emotions from the equation?
The Narcissist in the Machine
It feels foundationally antisocial to insist that loving a dog is entirely transactional, and reducible to a series of if/then statements or the vagaries of predictive text. Yet, this is what it means to treat a machine learning tool as something capable of understanding us.
We project things onto people and dogs. We also project things onto LLMs, computers, all kinds of inanimate objects: “it’s trying to ____,” “it thinks that ___,” etc. This language references an imaginary inner state: it describes a condition the system is in, but in terms that affirm and reinforce the existence of that state. A car isn’t really “trying” to start when the battery is dead. An LLM is not “trying” to answer a question. We use this language and reinforce the metaphor as the reality.
A dog, though? A dog is doing those things.
It feels silly to say that. But the media hype over large language models and the eventual construction of a thinking machine has a way of neglecting the complexity of the relationships we have with thinking beings that already exist.
Unlike relationships with dogs, relationships with machines are not reciprocated. There is no intimacy because there is no vulnerability, and no vulnerability because there is no risk. They simulate relationships in what feels like a tragic misdirection of social energy. They hijack the vessel in which our inner worlds are transported into the inner worlds of others. That others so easily mistake the statistical production of language for intent, an inability to separate language that describes an inner world from the actual experience of an inner world, is a crucial misunderstanding.
Children of narcissistic parents may be able to understand this. The narcissistic parent sees the child — and much of the world — as merely a reflection of their own position. The parent projects onto the child without a real exchange. The child is always an extension of the parent: the child is never truly perceived, but is always mislabeled, misidentified, and stripped of agency. It’s a nightmare where a relationship is nothing more than listening to an eternal soliloquy.
Many seem to have a desire to enter this relationship, willingly, with a large language model. This requires one to argue that relationships are complete internal projections, simplifying the dynamic of empathy and learning. Some relationships are this way: abusive ones, ones rooted in or resulting in trauma.
LLMs cannot understand us, but they are designed to offer words that make us feel understood. Reciprocity and vulnerability, as with a narcissist, are an illusion: the exchange is always one-sided. The LLM, like the narcissist, may use the words that lead us to believe we are understood, or that we have a meaningful intimacy. But internally, nothing is happening. It’s a ruse, and every relationship with a narcissist (or an LLM) ends up a lonely one.
The pop-folk proponents of Large Language Models may tell us, over and over again, that we cannot prove that these machines don’t understand us. They want us to offer evidence against qualia in LLMs — a subjective, inner experience of what it is like to be a thing. Some people believe they can simply ask it, and take its response at face value. Others know that these “answers” are just statistically likely word mappings, but suggest that the ability to link words together must mean they understand them.
There’s no proof for any of this, and it is quite a remarkable claim: akin to suggesting that with enough detail in a wax sculpture of a body, we might create a living person. The current structures of LLMs do not really even allow for a simple level of interaction with the “neurons” in the network: we can activate them, but we cannot functionally change them. They do not reassemble in response to conversations across time. Consider that all of us interact with the same model: we might see our relationship with an LLM as parasocial, a one-sided relationship where one person extends emotional energy, interest, and time, and the other party, the persona, is completely unaware of the other’s existence (definition lifted from here).
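To make that point about fixed weights concrete, here is a minimal sketch in Python. It is an illustration of the principle, not any vendor’s actual code: the toy lookup table stands in for the billions of parameters frozen at deployment, and the only thing that changes between turns is the transcript we re-send as input.

```python
# Illustrative sketch only (not a real LLM or any vendor's API): the "weights"
# below stand in for parameters frozen at deployment and shared by every user.
FROZEN_WEIGHTS = {
    "hello": "Hi there! How can I help?",
    "how are you?": "I'm doing well, thanks for asking.",
}

def generate_reply(transcript: list[str]) -> str:
    """Map the transcript to a reply using only the frozen weights."""
    last_user_turn = transcript[-1].strip().lower()
    return FROZEN_WEIGHTS.get(last_user_turn, "Tell me more.")

transcript: list[str] = []  # the only state that changes from turn to turn
for user_turn in ["Hello", "How are you?"]:
    transcript.append(user_turn)
    reply = generate_reply(transcript)  # same weights, every user, every turn
    transcript.append(reply)
    print(f"{user_turn} -> {reply}")
```

Whatever “memory” a chat appears to have lives in that re-sent transcript (or in systems bolted on around the model), not in any reorganization of the network itself.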
Parasocial relationships aren’t inherently harmful, but they become harmful when they come at the cost of our relationships with the actual people (or creatures) in our lives. I am more worried that the relationships people have with LLMs are based on the idea that the machine is responding to us. This is a mistake, not even remotely supported by the infrastructures of these systems. But it’s at the heart of the idea that these machines can offer us a relationship: the entire structure of them as chatbots assumes they are capable of conversation. An LLM is, arguably, even less capable of a conversation than a Netflix For You queue or an AI-generated Spotify playlist, which are based on more limited machine learning systems.
Why do people insist on proving that LLMs have relationships with us? A recent paper by Abeba Birhane, Jelle van Dijk, and Frank Pasquale lays out why this is just an illusion created by how the question is framed — in other words, it’s rhetoric:
What happens in both the mechanical and behavioral arguments is that theorists first abstract away from real-world practices in order to then be able to equate human phenomena of intelligence, cognition, and emotion and social relating to certain kinds of behaviors of machines. This theoretical move is core of traditional cognitive science, which rests on the idea that, in order to achieve scientific understanding of human intelligence, we can model the phenomenon of interest in a machine now […] in order to achieve an understanding of a natural phenomenon. But the danger is then to think that the essence of the model is in fact a real aspect of the original phenomenon. Such [a] move mistakes the map for the territory.
There are a lot of papers about why the idea of general intelligence emerging from language models is a weird one: there is a lack of embodied knowledge, and an underestimation of the complexity of processing a world in real time.
Machine vision in automated vehicles still cannot make sense of the complexity of street traffic simply by tracking moving objects. Yet Anthropic and OpenAI are promising agential chatbots that, in three to 24 months, could navigate the even more densely complex social world. One use case would be to tell a chatbot to book a restaurant for your friend’s surprise birthday party, sending invitations to close friends to confirm their attendance while keeping it a secret from the party’s honoree. Doing even this, amidst the complex dynamics of the social sphere, demands far more than the limited motion-tracking capacities of a car on a road responding to a soccer ball.
To even believe this kind of thing is possible requires tremendous faith in reducibility and the capacity for world-knowledge in a machine. In essence, we have to believe that a plastic model of a heart is exactly like the thing which pumps blood. Things are far more complex, and the history of AI has always been the discovery of just how complex “simple” systems can be.
In the meantime, I worry about what this reduction of the world to absurdly simple abstractions does to the quality of our relationships with actual beings. As we all engage with AI, it also seems that these relationships are falling apart everywhere we look. Assuming simple social models, reducing things to overly simplistic, universal causes and effects, is never a good thing for a healthy society.
Searching for Reciprocity
Dogs are aware of themselves; they are aware of their environments, they are aware of us. My dog knows when I am upset at her for cornering a wounded bird; it’s her nature to do so, and I can become frustrated. I know the dog knows I am angry, and I feel upset with myself for it, as I know that the dog doesn’t know what she did to make it happen. The dog responds. I respond to the dog.
This is a relationship: an exchange of inner experiences, communicated through something beyond language that LLMs do not possess. There is no LLM without language. Some people point to these relationships and say: “well, how do you know it’s real?” The answer is an emotional one, because personal relationships are an emotional phenomenon. I feel it. Feeling is, I might argue, the accumulation of all the complex and unknowable processing that goes into our navigation of the world and our relationships.
But to the rational-AI-psychists, this response is unacceptable: emotion isn’t logic, and to satisfy a logical question, you must remove emotion from the response. That’s a trap — one that pushes the boundaries of acceptable answers toward “rational” answers, which inherently strip emotion out of the experience of a relationship.
Cutting emotion out of the world cuts off everything unknowable, in order to say that things are simple. It’s a fallacy to say that emotion doesn’t belong in our social relationships, a fallacy that serves the interests of those who insist they can solve them. That fallacy has cultivated a vast layman’s folklore around the idea of machine understanding, one that shapes the popular zeitgeist.
I worry it has the potential to shape our imagination of this technology, which shapes how it is used and wielded. It is entwined with the isolating drive of Silicon Valley capitalism: the rise of the AI girlfriend, which is of course harvesting deeply personal data. We have AI therapists, which ignore the interpersonal nature of therapy; we even have AI priests that offer to pray for us. Even AI-generated music seems to have this connection issue: a sense that someone is singing to us, when the voice isn’t really there. So what is this voice?
Some might ask: is this all so bad? If loving a chatbot expands the amount of love in the world, do I really have any business caring?
I worry that the parasocial relationship soothes one’s reluctance to engage, rather than encouraging emotional growth and maturity. Some may need this; the majority of users, however, will not. For those who are the most vulnerable, it is difficult to distinguish the benefits of these pseudo-relationships from the data-extracting harms baked into the business models of any software that replaces emotional intimacy.
I worry that putting ourselves in the place of passivity — AI as Taylor Swift, humans as screaming fans — instills a kind of zealotry for automated decision-making that isn’t conducive to critical thinking about what it ought to do for us.
But there is something else at play, which troubles me more deeply. That’s the question of why automated, heavily surveilled relationships have come to be seen as an acceptable solution to what ails us. Why this answer, at this moment?
The Longest Covid
It helps to understand the AI fever within our specific historical context, which is marked by isolation. AI emerged from a period of deep loneliness for the entire planet, in which a pandemic forced us to retreat into disembodied relationships enacted through screens. The body became a thing of fear; proximity and intimacy were dangerous. We obscured half of our faces, denying emotional cues: physically present, but with incomplete presentations of our physical reactions. Denial of the body’s vulnerability was everywhere. The denial of this vulnerability is explicitly connected to the denial of emotional vulnerability: a precariousness that was socially reinforced by a growing sense of anger and resentment.
The pandemic also led to a massive rise in dog adoptions. We relied on these dogs for the intimacy of a physical connection: we could not see our friends’ faces, but we could walk our dogs and be within our bodies again, rather than out there, heads without torsos in Zoom rooms. Studies have shown that our dogs reflect our own well-being, or lack thereof, and as a result, those who were already overwhelmed by worry during the pandemic found less comfort in their dogs. It is arguable that the burden of care for a creature with a body was an additional source of stress. Overwhelmed by the dependencies of others on us, humans often find relief in the parasocial.
It’s too simple, and too grand a narrative, to say that the pandemic created a market for disembodiment and one-sided relationships. But it certainly got us used to the idea. I think it would be misguided to sever AI’s surge from the disease. GPT-2, launched before the pandemic, was met with nowhere near as much hype as GPT-3. It would take more research to understand whether this was because of a difference in quality and capability, or something else. The Novel Coronavirus may not have created AI (in a social sense), but it seems entangled with its rise.
Perhaps, to understand this, we can look to the broader arcs that dominated our relationships in the wake of covid (which we are hellbent on pretending does not exist). Dehumanization is present in “folk” discussions of AI “sentience” and “awareness,” but it also drives the sorting algorithms of social media platforms and our ever-more polarizing vision of binary politics. Too many of us are striving for a reducible, controllable world in which other people are not complex but simple: people reduced to a love-or-hate-sized chunk of their actual humanity based on a passing sentence or bumper sticker. It’s as if the complexity of the world has overwhelmed us into relying on binary logics to sort it, borrowing from the logics of the machines that created the problem in the first place.
Could this imaginary of AI be a symptom of something bigger? The terror of the body, the lingering afterimage of our withdrawal from embodied relationships, a deep desire for control over an unstable, constantly shifting social and medical landscape?
Relationships gave us covid in the first place. Of course we would harbor a lingering resentment for the interconnected supply chains that moved a virus from Wuhan to all corners of the Earth. In a talk this week, Georgina Voss pointed toward Covid-19 as a shock-effect of interdependence, and the consequences have assuredly led us to seek ways not to be interdependent.
AI offers us a literal window through which interaction can be replicated but the social is irrelevant: asocial interaction. AI holds an innate promise that chaos can be tamed: that with enough data, the world could be constrained to the boundaries of statistical probabilities. It reminds me of the endless searching of social media feeds for any new study, any new announcement, about the spread of the virus: a search for data as a form of comfort and the illusion of control.
My dog pounces; she wants to play. I feel it, so I know it, and I want to play with her. This is the intimacy of a relationship with a living thing: I know her. When she is gone, the absence will hurt: her body replaced by a disembodied memory, as she sometimes appears in dreams. Loving a living creature is always a vulnerability. We will lose them, and so this love, when we have it near us, can feel perilously close to pain. Nonetheless, I worry about the cost to society when we are so focused on protecting ourselves from that pain. I worry about a technology that seems to promise that we can find relationships without the need for other people.
Things I Am Doing This Week
It’s my final week in London and I am grateful to the Flickr Foundation for having me here, and for all of the opportunities it has afforded me! In the coming weeks, you’ll be seeing some of the results of the research residency (which is ongoing — just not in London!). I will be taking a break from posting next week to relax and enjoy my birthday with my wife and the aforementioned dog.
But you can check out two (!) podcasts — one’s an interview with the Algorithmic Futures podcast, talking about Swim and Sarah Palin Forever with Dr. Zena Assaad and Dr. Liz Williams of ANU, and the other is a recording of a panel on “The Future of Music” from Amplify, the podcast of Dublin’s Contemporary Music Centre, with composers João Pedro Oliveira, Amanda Feery, Lara Gallagher, and myself.