I am beyond excited to be a guest on BBC Radio 4’s The Digital Human radio show this week, broadcasting February 20. The episode is about the tension between creativity and automation in AI art, or as they put it:
Aleks explores why art is so core to some people’s existence, why these Generators have such wide appeal, uncovers the story of a pioneer who grappled with the place of human and machine in art making for decades, and finds out why wonky AI may offer the most opportunity for human imagination to bloom.
Very excited; I hope you’ll tune in after Monday.
Sensitive Noise
Sensitive Noise is an animated series of censored AI-generated images from Stable Diffusion. Automated content moderation systems blur images deemed inappropriate, but the mechanism for determining sensitive content is often inscrutable to humans. While collecting a series of images for the prompt “Gaussian Noise,” which generates abstract images as a result (or representation) of rendering errors, I noticed that many of these images of pure noise were nonetheless flagged as containing sensitive content.
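For the technically curious: in the open-source diffusers library, this flagging is visible as a per-image flag returned by the pipeline’s built-in safety checker. A minimal sketch, with the model ID and settings chosen for illustration rather than my exact setup:

```python
# Minimal sketch: generate images for the "Gaussian Noise" prompt and inspect
# which ones the safety checker flags. (In diffusers, flagged images come back
# censored as black frames; other interfaces blur them instead.)
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model choice
    torch_dtype=torch.float16,
).to("cuda")

result = pipe("Gaussian Noise", num_images_per_prompt=4)

# The safety checker scores each image against learned "sensitive" concepts;
# pure noise can still land close enough to those concepts to be flagged.
for i, flagged in enumerate(result.nsfw_content_detected):
    print(f"image {i}: {'censored' if flagged else 'ok'}")
```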
Collecting these images, and further prompting the model with “Gaussian Noise, Human Sensuality,” created a new dataset of censored abstract images. Sensitive Noise compiles these images into a silent video work, with each frame interpolated into the next. It is clear that these images represent nothing (though some suggest more than others) — nonetheless, they are “suggestive” to the machine.
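The piece doesn’t depend on any one interpolation technique; as a minimal sketch of the idea of blending each frame into the next, here is a linear cross-fade using NumPy and Pillow (the helper name is mine, not from the work itself):

```python
# Hypothetical sketch: yield in-between frames that blend one still into the next.
import numpy as np
from PIL import Image

def crossfade(a: Image.Image, b: Image.Image, steps: int = 24):
    """Yield `steps` frames linearly blending image `a` into image `b`."""
    arr_a = np.asarray(a, dtype=np.float32)
    arr_b = np.asarray(b, dtype=np.float32)
    for t in np.linspace(0.0, 1.0, steps):
        frame = (1.0 - t) * arr_a + t * arr_b  # pixelwise linear blend
        yield Image.fromarray(frame.astype(np.uint8))
```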
The knowledge that these contain forbidden scenes of human sensuality prompts the viewer to seek out the sensual element of these images. Perhaps in series, they become suggestive: the colors, contours, and shapes form a collective visualization of data connected to prompts for sensuality in LAION, the text-image dataset behind Stable Diffusion. Perhaps they are suggestive simply because we aim to see them through the eyes of the content moderation system’s categorization.
Sensitive Noise is part of a series of AI-generated artworks in which I attempt to work with “Artificial Impossibility.” These are pictures that contemporary AI image generation tools cannot create, whether because of technical constraints (such as rendering errors, e.g., human hands or Gaussian noise), content moderation (blurred images), absences in the dataset, or the conceptual limitations of visualization itself.
Notably, many of these impossible images relate to renderings of human experience: the realm of senses beyond the visual, especially touch, exposes the lack of embodied experience within “artificial intelligence” systems.
This Week in Class:
Diffusion: Flowers Blooming Backward into Noise
How does a Diffusion model turn pure noise into an image of flowers in bloom? In this class we will talk about Diffusion models, the technology at the heart of DALL·E 2, Stable Diffusion, and Midjourney. We’ll explore how Diffusion works and how language models steer images into being based on what you write. Then we’ll think about where the artistry lies in this process: is the AI making the art? Is it dreaming or imagining these images? We’ll look at John Searle’s “Chinese Room” thought experiment to think through those questions. Finally, we’ll look at whether AI art is a radical shift in art making, or extends a 60-year history of computer-based Generative Art.
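If you want a preview of that reverse process in code, here is a toy sketch, assuming the diffusers-style interface in which a trained UNet predicts noise and a scheduler removes a little of it at each step (the names are illustrative; in text-to-image systems, the noise prediction is additionally conditioned on an embedding of your prompt, which is how language steers the image):

```python
# Toy sketch of reverse diffusion: start from pure Gaussian noise and walk
# backward through the noise schedule until an image "blooms" out of it.
import torch

@torch.no_grad()
def generate(unet, scheduler, steps=50, shape=(1, 3, 64, 64)):
    scheduler.set_timesteps(steps)              # e.g. 50 denoising steps
    x = torch.randn(shape)                      # start from pure noise
    for t in scheduler.timesteps:               # noisy -> clean
        noise_pred = unet(x, t).sample          # predict the noise present at step t
        x = scheduler.step(noise_pred, t, x).prev_sample  # remove a little of it
    return x                                    # the finished image
```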
Artists in today’s lecture include Leslie Mezei, Georg Nees, Robert Mueller, and Tim Klein, as well as the 1972 Computer Art exhibition organized by Laxmi Sihara in New Delhi.
I aim to diversify the examples of early generative artists in future iterations of this lecture; recommendations are welcome!
Apply for STORY x CODE!
I’m excited to be part of the core planning team behind SubLab & AIxDesign’s STORYxCODE: a cross-over artist residency, research project, and public event program exploring the potential and role of Artificial Intelligence & Machine Learning in storytelling, focusing on animation and filmmaking. For me, it’s about how creatives can navigate these tools in ways that center the human side of the vision.
This isn’t just about generating images, but also about algorithmic editing, sound design, co-writing dialogue with GPT text models, character development, object recognition in post-processing, creative coding for editing, and the many other ways AI might show up.
Within the program, we’re pairing 6 Filmmakers & 6 Creative Technologists to work together on story and develop a teaser/trailer, experimenting with AI/ML techniques throughout the creative process.
If you’re a filmmaker who's curious about AI’s role in filmmaking and animation, we invite you to join us and get up close and personal with algorithmic tools and thinking.
If you’re a creative technologist interested in film and storytelling, we invite you to join us to work alongside a storyteller in playing with and reflecting on creative AI. Apply by February 21!
I am a guy in my office who just walked his dog before sending this. You can verify that by following me on Twitter, Mastodon, or Instagram. And as always, do feel free to share this newsletter, or sign up if it’s been shared with you!