Thanks for sharing this talk here. I enjoy your point of view a lot and appreciate how, as an artist, you consider AI critically without dismissing it entirely.
The point about images of child abuse is definitely one I need to mull over. There's a fair bit of discussion in writing spheres about depictions of trauma and abuse that reminds me of this. One key point is that agency and ownership are extremely important in preventing the exploitation of stories of abuse — for example, asking whether there is a meaningful difference between a survivor sharing their story to raise awareness and depictions of trauma, by someone who hasn't experienced it, made for shock entertainment value (the discussion around the treatment of women in Game of Thrones, for example). This ties in neatly with what you're suggesting about the human element and context in archives. What does it mean for these images to be included in databases? What does it mean to generate this kind of content from an AI dataset? How do we feel about this as a society?
I sadly don't really have an argument to make here; I just wanted to say it's an extremely difficult topic that I try to be mindful of. It's great when these tricky ethical questions are brought up, so thanks for your open mind and honesty.
Well said. My only caveat is that the human-in-the-loop element needs to be balanced. We do not need pictures of George Washington as a Black man, or refusals to generate accurate images of people, caused by bias embedded in the algorithms. This is why even the human loop needs to be vetted.
That is a nice speech. However, I'll keep pressing you on this point: images of abuse can be generated even without abuse images in the dataset, and the existence of these images is a problem not just because they are in a dataset, but because they exist and are online at all (indeed, if they had not been in an open dataset, we would never have noticed them). As for using a public good (data) to generate private profit, that is undoubtedly a problem. Rather than the shield of copyright, which would allow only those with substantial financial means to build datasets, perhaps we should require open source from those who use our data, and maybe even impose some tax for the common good... What you say about the responsibility of our archives is very true; I consider it a very important point.
There’s no way you can read that speech and come away thinking I’m okay with child abuse and with its images being online — just not in the training data. Your point is in such bad faith that it must be deliberate.
You don’t need to read my newsletter if you’re going to constantly condescend to me for holding positions I don’t hold.
That wasn't at all what I meant; perhaps I expressed myself poorly. What I meant to say is that the real problem is that those images exist and are online, not merely that they are present in the dataset. Their presence there is certainly a problem, but not so much because they can be reproduced with software as because it testifies to their existence and availability online. The data used by AI is (also) a mirror of our horror, rather than an aggravating factor. It would be absurd to think you are okay with their presence online! Maybe I explained myself badly. I apologize, because I never meant to suggest you were advocating any such thing.