Misinformation from generative AI images and text
Quick thought on generative AI for images vs text from a misinformation perspective:
Recently, an image of the Pope wearing a puffer jacket went viral. Fine - the man can wear what he wants. But of course, it was AI-generated.
Which made me think about misinformation. Any generated image is inherently misleading: if it’s photorealistic, the implication is that the scene depicted really happened and was photographed honestly. So attribution matters: an actual photo of the actual Pope, shared by an actual photographer, is ‘true’, while a generated image is ‘false’.
In contrast, a piece of generated text may or may not be misleading: if the claims made in the text are accurate, it may not matter who wrote the words. So attribution doesn’t matter: the world is round whether that’s claimed by ChatGPT or, well, the Pope.