I’ve been at RSD10 all week, an annual conference focused on systems thinking and design. I was delighted to work as a facilitator and on the planning committees for two of the workshops.
In this post I’ll share a bit about one workshop — “Technology for Living: #NewMacy in the 21st Century,” part of the ongoing New Macy conversations taking place with the American Society for Cybernetics. The workshop is developing a useful framework for turning critique of systems into concrete design. While it focuses on the particular pathologies of artificial intelligence, I think it’s useful for many other things. It was presented by American Society for Cybernetics president Paul Pangaro, alongside Michael Munton and me.
In previous newsletters I’ve written about Paul Pangaro’s “Pandemic of Today’s AI,” a framework for understanding a series of tensions between digital and analog approaches, or, less pointedly, between socially animated and data-animated design. Paul has been introducing the New Macy sessions with some version of this illustration, which I’ve referenced before.
In the workshop, we took some time to break apart what it means for AI to be a “pandemic.” Essentially, we have a technology that has worked its way into our lives with insidious consequences. These consequences are often treated as the core disease: Facebook’s weaponization of emotions and political polarization, for example, or the deployment of surveillance systems that reinforce the biases of the records and processes their data was collected through.
If we think of these as the disease, we might go after them individually, breaking them up into slices to fix. I want to suggest another way to understand these failed systems.
Cyber-Physical Symptoms
I want to talk about symptoms. A symptom tells us that something in the system is amiss. An expert uses symptoms to identify the underlying disease (or “dis-ease”). When a system isn’t working, we see evidence popping up in unusual places, in incomprehensible or dangerous pathologies. Undiagnosed, they metastasize, damaging our ability to move through the world and relate to one another, and weakening our ability to protect ourselves against new ailments.
I suggest that we think about the pandemic of today’s AI as a set of social symptoms that reveal something misaligned in our social systems. These symptoms express themselves in the social body, just as tumors or rashes or coughs express themselves in the physical body.
In the workshop, we operated on the idea that design for today’s AI and cyber-physical systems has its own set of symptoms. (Perhaps we can call them “cyber-physical symptoms.”)
Modern technological systems are more and more an extension of, or intrusion into, the social “body.” But because of scale, the symptoms are contagious, viral, and communicable. This can be physical: Facebook's algorithms depress and anger us through automated curation of images and conversations. We adapt, adopting unnatural and counterintuitive new approaches to online social spaces in response.
We can return to Paul’s introduction and the “Pandemic of Today’s AI.” The AI pandemic spreads through the hands of designers and infects the social body. The products we build are symptoms, the result of a digital-supremacist view: an imbalance toward one side at the expense of the other.
I think of Paul’s tension framework as a medical chart for assessing the symptoms, and getting at the roots, of this AI disease. The systems show symptoms. And those symptoms emerge from overreliance on one column — the “digital” — at the expense of the other — the “social.”
We started by asking which pairs of tensions contribute to the flaws of current AI systems, and then collected new pairs of tensions that might be evident. For the next set of conversations, we treated these dichotomies as tools for analysis, reflection, and imagination: how might this be different?
We looked at our list of cyber-physical symptoms: intrusive AI, invasive data collection, racial bias in surveillance systems, etc. We had a diagnosis: an imbalance between data-animated and socially animated systems, for example. And then we began to contemplate the vaccine — what “doses” of the digital or the analog can be safely integrated into the other?
Moving from diagnosis to treatment — critique to action — is a meaningful challenge. It's tempting to think we can just remove the tumor: slice out the digital, leave the social, problem solved. But the social isn’t “pure” either. In fact, many problems of AI can be diagnosed through the lens of over-reliance on the social: a naïve view of race and police data, for example, over-emphasizes a social view of trust and relationship.
It may be more helpful to approach these problems as if designing a vaccine.
Hormesis
There's a medical concept, Hormesis, that goes back to the first European medical chemist, the Swiss alchemist Paracelsus. Hormesis is defined today as “a biphasic dose response to an environmental agent,” but Paracelsus defined it more simply in his day, writing that “all things are poison, and nothing is without poison. Only the dose makes the difference.”
In a vaccination, you’re introducing an appropriate dose of the virus to strengthen the body rather than weaken it. It stimulates an immune response, so that we’re better prepared for real threats.
In break-out sessions, we asked participants to think about the tensions they had laid out, the imbalances toward one side or another, and then to use those tensions to develop a vaccine. (We warned against taking the metaphor too seriously: a vaccine for today’s AI might work differently than a medical vaccine.)
What do we get if we look at pieces of each extreme and fine-tune the proper proportions for the social body? How can we make that social body stronger?
What might we build if digital approaches were integrated into existing systems in sustainable doses, rather than intruding into the body in poisonous ways? What does the design process for AI look like when the digital and analog are in balance? How do you blend dynamics without overpowering one or the other? What do we want to change, and why? What values are reflected in those decisions? What might this so-called “vaccinated AI” look like as a design proposal?
Further questions abound, of course: which social body? What decisions can we make, and on whose behalf? Who resists the vaccine? Who decides how it is distributed?
The results focused on a number of different outcomes, from the Metaverse to misinformation campaigns to surveillance systems. A number of new tensions emerged, too — between the individual and the collective, for example. When we look at an individual through a “digital” framework, we can optimize their pleasure in ways that harm the collective. Even solutions viewed through this lens become more complex: if we allowed data ownership, or paid people for their data, what about the social and collective harms of filter bubbles and optimized media delivery? When we look for the appropriate dosage of that “digital individualism” within a “socially animated” framework, we might find a greater potential for balance.
Things I’m Reading This Week
Neighborhoods Watched: The Rise of Urban Mass Surveillance
Project by Michael Isaac Stein, Caroline Sinders and Winnie Yoe
What happens when cities build layers of data-collection practices into layers of bureaucracy, with no central planning authority or transparency? This report looks at New Orleans as a case study of what happens when plausible deniability is the central organizing principle for a city’s software stack.
“New Orleans’ surveillance apparatus isn’t so much a comprehensive system as it is a sprawling, decentralized and constantly changing patchwork of tools maintained by various city departments, semi-independent agencies, private nonprofits and federal and state law enforcement.”
The Limits to Digital Consent
Simply Secure + The New Design Conference
A white paper describing the output of research interviews with marginalized communities around their navigation of digital consent. It’s a useful guide to the limits and outdated nature of today’s digital consent agreements.
Homo Imaginatus
Philip Ball
I question some of what’s written here — my dog dreams, clearly, and her legs seem to act out the chasing of rabbits after she’s seen a rabbit through the window. She investigates the window because she imagines the rabbit might be there. I can’t commit to the idea that imagination is what makes humans “unique,” though I know rationalist killjoys will tell me I have no proof that my dog dreams or that chickens feel pain or whatever. I get it. Nonetheless, I can imagine that they do, so look at me, being human after all.
There’s reason to suppose that imagination is far more than a quirky offshoot of our complicated minds, a kind of evolutionary bonus that keeps us entertained at night. A collection of neuroscientists, philosophers and linguists is converging on the notion that imagination, far from a kind of mental superfluity, sits at the heart of human cognition. It might be the very attribute at which our minds have evolved to excel, and which gives us such powerfully effective cognitive fluidity for navigating our world.
Publishing on Tuesdays didn’t go so great, everyone. So now it’s once a week, willy-nilly.