
On Recommendation Algorithms and What Makes Us Boring


Constrained Choices and Compliance

Irina Raicu

Irina Raicu is the director of the Internet Ethics program (@IEthics) at the Markkula Center for Applied Ethics. Views are her own.

Wired magazine’s “spiritual advice columnist,” Meghan O’Gieblyn, recently replied to a reader who noted that a streaming music app is “scarily good at predicting songs” that the reader would like and asked, “Does that make me boring?”

O’Gieblyn redefined the question: “I’m willing to bet,” she wrote, “that your real anxiety is not that you’re boring but that you’re not truly free. If your taste can be so easily inferred from your listening history and the data streams of ‘users like you’ (to borrow the patronizing argot of prediction engines), are you actually making a choice?”

Later in the column, however, she noted that users of services built on recommender algorithms, the questioner among them, do make choices—but choices that are themselves shaped by the algorithms:

On TikTok, we quickly scroll past posts that don't reflect our dominant interests, lest the all-seeing algorithm mistake our curiosity for invested interest. Perhaps you have paused, once or twice, before watching a Netflix film that diverges from your usual taste, or hesitated before Googling a religious question, lest it take you for a true believer and skew your future search results.

These are choices born of restrictions: efforts to mollify the algorithm’s rigid and limited perspective, lest it go from merely oversimplifying our interests to being outright wrong about them.

For the subset of users who understand the algorithms’ impact well enough to try to appease them, the answer, then, is yes: that makes one boring. But the “that” is not the users’ predictability; it is their compliance with the algorithms. In this scenario, it’s the user who has been trained by the algorithm, not the other way around. The user acquiesces to being a shell of his or her “dominant interests,” worried about the consequences of trying (or even learning about) new things.

While fully acknowledging this reality, O’Gieblyn writes that she doesn’t “advise embracing the irrational or acting against your own interests” in response. “It will not make you happy,” she argues, “nor will it prove a point.” Disagreeing with this view, in a different take on recommendation algorithms, Clive Thompson (who also writes frequently for Wired) argues for “rewilding your attention”; he claims that pushing back against the algorithms is not, in fact, against one’s own real interests. As he puts it,

our truly quirky dimensions are never really grasped by these recommendation algorithms…. They’re not wrong about us; but they’re woefully incomplete. This is why I always get a slightly flattened feeling when I behold my feed, robotically unloading boxes of content from the same monotonous conveyor-belt of recommendations, catered to some imaginary marketing version of my identity. It’s like checking my reflection in the mirror and seeing stock-photo imagery.

In other words, to offer a different answer to the Wired questioner who asked about the music app, the recommendation algorithms do make you boring—and static—if you allow them to do all the work of finding music or other “content” for you.

To break the false mirror that Thompson describes is therefore not to “embrace the irrational” but to try to embrace your full self. In his post, Thompson offers a variety of suggestions for how one might go about “rewilding” one’s attention in an age of recommendation algorithms, while acknowledging that this requires more effort on our part.

If your music app’s recommendations are too accurate, you might not be boring but merely stuck in a rut, perhaps in need of a reminder that might arrive (serendipitously) in a tweet from a bot quoting Professor Richard Feynman: “You are under no obligation to remain the same person you were a year ago, a month ago, or even a day ago. You are here to create yourself, continuously.”

It’s important, also, to note that recommendation algorithms used in the context of, say, music streaming apps have very different societal impacts than those used in social media feeds or in news media outlets. The latter categories of recommenders have been accused of being partly responsible for increased social polarization, filter bubbles that impede understanding, radicalization, and other significant negative consequences that go far beyond making us “boring.”

It seems only fitting to conclude this post with a couple of recommendations for further reading:

- a useful analysis and taxonomy published in 2020 by Silvia Milano, Mariarosaria Taddeo, and Luciano Floridi, titled “Recommender Systems and Their Ethical Challenges”

- a blog post by Claire Leibowicz, Connie Moon Sehat, Adriana Stephan, and Jonathan Stray, about research conducted by the Partnership on AI, titled “If We Want Platforms to Think Beyond Engagement, We Have to Know What We Want Instead”

 

Image: "monotony", by Jake Blues Photography, is licensed under CC BY-NC 2.0

Dec 14, 2021