Markkula Center for Applied Ethics

Falling Flat

Stressball with angry expression

On Communications, Creativity, and Generative AI

Irina Raicu

Irina Raicu is the director of the Internet Ethics program (@IEthics) at the Markkula Center for Applied Ethics. Views are her own.

The following first appeared in The San Francisco Chronicle on June 7, 2024, under the title "An Apple ad just got a little too real about technology and AI's impact on us."

Last May, an Apple ad misfired. You might not have seen it, because within a day of its release online, the company apologized for it and announced that it would not run on television.

The ad showed, to the musical accompaniment of “All I Really Need Is You,” a series of musical instruments and other artists’ tools being crushed by a giant hydraulic press (among them a trumpet, a piano, paints, a clay sculpture, a drum set and, weirdly, a stress ball with an emoji face, which looked quite distraught to be destroyed); at the end, the top of the press rose to reveal that they’d all been compressed into the latest version of the “most powerful” and “thinnest” iPad.

On Twitter, the reaction was swift: To many viewers, the ad was a metaphor for the age of generative artificial intelligence. A typical response (from @JoJoesArt) was “This new ad by Apple perfectly depicts what Big Tech has sadly come to stand for: crushing human creativity in the name of technological innovation and selling it to us as progress.”

But it’s not just human creativity that’s getting compressed into something else—or at least not just artistic creativity. Increasingly, the day-to-day communications of those of us who are not artists are also fodder for the generative AI mill.

For example, Reddit announced a partnership with OpenAI that will allow the AI company to train its models on the social media site's data. In February, Reddit had announced a similar agreement with Google. However, as some have pointed out, Reddit content had already been scraped and used to train AI models, including OpenAI's. Meta is training its own AI models on content that includes public posts by the users of its platforms, and X is training its AI models on public tweets (likely including many of the tweets critiquing the Apple ad).

Our social media posts are part of the huge datasets collected, turned into language "tokens," recombined via statistical analysis and some human feedback, and then fed back to us in the form of chatbot responses to questions or prompts on ChatGPT and the like. The idiosyncrasies of human writing (which include subject matter and stylistic preferences, grammar errors and simile choices) are flattened like the crunched, melted and squashed textures of the objects in the Apple ad, and turned into something that is indeed very powerful, but also thin and flat—qualities that might sound good in tech tools but not as descriptors of expression.

Social media platforms like Reddit, Facebook and Twitter were introduced as means for people to talk to each other, across great distances and varied social networks. We are just now coming to grips with the fact that our communications on those platforms have become “data” to train the models whose flattened output is then offered to us as more effective (or at least sufficient) ways to communicate. (Some research has shown that services like ChatGPT can improve certain kinds of writing, like short reports—but that should caution us about their use in other contexts in which we don’t want to sound like reports.)

And we were not asked whether we're OK with such use of our posts. Could it really have surprised the creators of the Apple ad that many people would identify with the googly-eyed stress ball that gets crushed 45 seconds into the video, rather than with the human hand that holds the technical tool at the end?

The metronome shown in the ad is also technology, as is the turntable; they help people keep time or share music across distance and time, so people value them. The hydraulic press is technology, too. Like AI, it can be very useful when deployed in the right context but otherwise can be harmful to many of the things we value—like authentic, personal communication.

Photo: Stressball by Pat Pillon, used under a Creative Commons license.

Jul 23, 2024