AI doing Georgia O'Keeffe

Have we not learned anything from Jurassic Park?

01 January, 2024

One of the first things that springs to mind as an artist thinking about AI is plagiarism. I won’t go into this issue too deeply because that’s an essay in and of itself. But I will say this: I don’t think AI itself is necessarily the problem; it’s how we talk about AI, and how we use it, that can be problematic.

The language of AI

There’s this great series called Catfish: The TV Show. I used to watch it religiously, and I think what fascinated me was the idea that people could fall in love with someone they had never met, someone who usually didn’t even exist. What I learned from Catfish was that language is so powerful it can make a person develop a deeply emotional, one-sided relationship with words.

Enter ChatGPT …

ChatGPT is an interface for a GPT (Generative Pre-trained Transformer). What ChatGPT did was allow people to talk to a GPT as if it were human.1 Nir Eisikovits is currently studying how engaging with AI affects a person’s understanding of themselves, and he warns that because using AI feels like having a conversation, we attach human qualities to it and expect human responses.2 Tech philosopher Tom Chatfield suggests the anthropomorphic language we use to describe AI implies it possesses human attributes such as a worldview and a mind.3 Because AI can respond to our questions much like a human would, perhaps we give it more agency than it deserves.

Taking responsibility

With text-to-image AI models, a person has to type a prompt such as “A flower painting in the style of Georgia O'Keeffe” for the model to generate an image (see image – I’m pretty sure O'Keeffe is safe for now). I think it’s important to ask whether the AI is plagiarising by learning from everything humans have uploaded to the internet, or whether the individual is plagiarising by writing a blatantly uncool prompt.

Where is our individual accountability and responsibility to others? Is it OK to pass 100% of the blame onto AI tools, or should we accept individual responsibility for how we use them?

I’ve watched Jurassic Park around 1 million times, and there’s this great scene where the characters are sitting around a table discussing their initial reactions to the park. Dr Ian Malcolm says, “I'll tell you the problem with the scientific power that you're using here. It didn't require any discipline to attain it.” He continues, “You didn't earn the knowledge for yourselves, so you don't take any responsibility for it.” Here’s the kicker, "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."

I’m reminded of this scene when thinking about AI.

  1. The Ezra Klein Show. "A.I. Could Solve Some of Humanity’s Hardest Problems. It Already Has." YouTube video, 1:28:01. Posted by New York Times Podcasts, July 11, 2023.
  2. Eisikovits, Nir. "AI isn’t close to becoming sentient – the real danger lies in how easily we’re prone to anthropomorphize it." The Conversation. Updated March 16, 2023.
  3. Chatfield, Tom. "AI hallucination." New Philosopher (June 2023): 76. Gale Academic OneFile.