12 Comments
Oct 5, 2022 · Liked by Eryk Salvaggio

Note - this article makes the assumption that Dall-E 2 uses the LAION 5B dataset and that therefore the base dataset is searchable.

To my knowledge this is not the case - the Dall-E 2 dataset is internal to OpenAI and has not been publicly divulged. LAION is independent of that.

The other pitfall is that a tool like https://haveibeentrained.com likely uses CLIP embeddings (though they give very little info) to identify and pick images, i.e. another image recognition model with its own biases. A different prompt can surface more 'real' kisses, such as "photo of a kiss".
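
For anyone who wants to poke at this themselves, here is a minimal sketch of the kind of CLIP-embedding lookup a tool like haveibeentrained.com plausibly performs, using the open-source clip-retrieval client. The backend URL, index name, and result fields are assumptions about the public LAION search service, not a description of that site's internals.

```python
# Sketch: query a CLIP-embedding index over LAION with different prompts.
# Assumes the public clip-retrieval backend and LAION-5B index name; both
# may have changed, so treat this as illustrative rather than authoritative.
from clip_retrieval.clip_client import ClipClient

client = ClipClient(
    url="https://knn.laion.ai/knn-service",  # public LAION kNN backend (assumed)
    indice_name="laion5B-L-14",              # LAION-5B index name (assumed)
    num_images=20,
)

# Two prompts that land in different regions of CLIP's text space,
# which is why they surface different-looking training images.
for prompt in ["humans kissing", "photo of a kiss"]:
    results = client.query(text=prompt)
    print(prompt)
    for r in results[:5]:
        # Each result is a dict; caption/url are the fields the public index
        # is assumed to return.
        print("  ", r.get("caption"), r.get("url"))
```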

A lot of this is valid, but I think an oversimplification - because you really have the interaction of multiple AI models, one that interprets the semantic content of your phrase, and another one that turns that semantic content into an image.

Because 'humans kissing' is a much colder and more technical description of the image, you get much more awkward kisses out of that. Normal people's intimate photographs will not have been labelled 'humans kissing', so you're more likely to get those simply asking for 'photo of a kiss' or similar - likewise in Dall-E, you will get vastly different vibes between those two prompts.

Interestingly 'photo of a kiss between two women' does not trigger content warnings for me either, and is generated without issue, which says a lot about the vast semantic difference between 'kissing' and 'a kiss', as well as the clinical use of 'humans' versus omitting it altogether (what else is Dall-E gonna do, show us chimps kissing?)
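
To make that semantic-difference point concrete, here is a rough sketch comparing the three prompts in the text space of an open CLIP checkpoint. Dall-E 2's own text encoder is not public, so the model name and the resulting numbers are stand-ins for illustration only.

```python
# Sketch: cosine similarity between prompt embeddings from an open CLIP
# checkpoint, used as a stand-in for Dall-E 2's internal text encoder.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["humans kissing", "photo of a kiss", "photo of a kiss between two women"]
inputs = processor(text=prompts, return_tensors="pt", padding=True)

with torch.no_grad():
    emb = model.get_text_features(**inputs)
emb = emb / emb.norm(dim=-1, keepdim=True)  # normalize for cosine similarity

sims = emb @ emb.T
for i in range(len(prompts)):
    for j in range(i + 1, len(prompts)):
        print(f"{prompts[i]!r} vs {prompts[j]!r}: {sims[i, j].item():.3f}")
```

Lower similarity between 'humans kissing' and the 'photo of a kiss' variants would support the point that these prompts steer the image model toward quite different parts of its training distribution.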

Oct 2, 2022 · Liked by Eryk Salvaggio

This is a terrific post, and I'm going to add it to my Theory of Knowledge unit on what AI can teach us about how technology shapes our knowledge. Thank you.

A clarification, please!

In "1. Create a Sample Set" the method is unclear. You generate a bunch of pictures, ok, I am with you. Then it sounds like you pick out "notable" images manually from a larger pile of generated pictures, and use that "notable" set as the sample you perform the first stage of analysis on? Or have I got this wrong?

I tried the same on an uncensored version of Stable Diffusion...

Curious to know what you think about it

https://olivierauber.medium.com/le-baiser-artificiel-f2c60f48926b

And sometimes an image just had a random idea.
