Markkula Center for Applied Ethics

Media Mentions


A selection of articles, op-eds, TV segments, and other media featuring Ethics Center staff and programs.

The Markkula Center for Applied Ethics does not advocate for any product, company, or organization. Our engagements are intended to provide training, customized materials, and other resources. The Markkula Center does not offer certifications or seals of approval.

Scientific American
A Brief History of Automatons That Were Actually People

Astra Taylor calls human labor hidden under the veneer of a robot or AI tool "fauxtomation."

"This phenomenon is nicknamed “fauxtomation” because it “hides the human work and also falsely inflates the value of the ‘automated’ solution,” says Irina Raicu, director of the Internet Ethics program.

“This is not just a question of marketing appeal,” Raicu says. “It’s also a reflection of the current push to bring things to market before they actually work as intended or advertised. Some companies seem to view the ‘humans inside the machine’ as an interim step while the automation solution improves.”

Irina Raicu, director, internet ethics, quoted by Scientific American.

TechTarget
Beyond AI Doomerism: Navigating Hype vs. Reality in AI Risk

As AI becomes increasingly widespread, viewpoints featuring both sensationalism and real concern are shaping discussions about the technology and its implications for the future.

"We're all pursuing the same thing, which is that we want AI to be used for good and we want it to benefit people," said Brian Green, director of technology ethics

Brian Green, director, technology ethics, quoted by TechTarget.

The Daily Upside
Intel Filing Could Diversify Deepfake Detection Models

This tech includes a system that labels images by race, Green noted. However, race and ethnicity aren’t always easily detectable just by looking at an image. “It’s a simplification of human diversity that could be ethically problematic.”

“If AI in general gets this bad name because of deep fakes or other unethical behavior, then that could perhaps cause a backlash that would go all the way back to the chip industry,” Green said.

Brian Green, director, technology ethics, quoted by The Daily Upside.

Lifewire
This Always-Recording AI Microphone Will Make Your Coworkers Hate You

From the same company that brought you Rewind, which records everything on your computer, comes the Limitless AI microphone, which records all the audio you hear and processes it using AI.

"The privacy concerns raised by any non-obvious recording device might not be limitless, but they're pretty vast. In this case, the fact that there's a feature called 'consent mode' for new voices that would be recorded, but that mode (according to media reports) is off by default, is a troubling signal about respect for privacy. We already live in a world in which people distrust so much of the technology around them; in order to build trust, privacy, at least, needs to be the default in design."

Irina Raicu, director, internet ethics, quoted by Lifewire.

San Francisco Chronicle
Remember the Fight Over Net Neutrality? Biden’s FCC Chair Wants to Bring it Back

Chase DiFeliciantonio, reporting for the San Francisco Chronicle, addresses net neutrality, a long-debated policy that was solidified under President Barack Obama and reversed under President Donald Trump. The policy required internet service providers to treat all communications on their networks the same, regardless of content. Its reversal led broadband companies to a model that provides more robust service to those willing to pay more for it.

"Industry usually prefers to have one set of rules and uniformity and opposes patchwork laws," said Irina Raicu, director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University. But internet providers "opposed federal regulation and ended up with a patchwork of laws."

Irina Raicu, director, internet ethics, quoted by the San Francisco Chronicle and republished by Government Technology.

The Economic Times
Welcome to a Future Where the Power is in the Palm of Your Hand!

As palm-recognition technology gains traction with Amazon and other companies, ethical concerns surrounding its use are surfacing, raising questions about potential misuse and the need for robust safeguards.

Brian Green, director of technology ethics, highlights the dual nature of data as both an asset and a liability: "Data is not only an asset, but also a liability for those holding it."

“Combining biometric data with big data, AI, DNA, surveillance cameras, and geolocation data could lead to advanced profiling of individuals, raising further ethical concerns,” he adds.

Brian Green, director, technology ethics, quoted by The Economic Times.

Forbes
What To Take Away From NBC’s Hire And Fire Of Ronna McDaniel

NBC News parted ways with former Republican National Committee chair Ronna McDaniel four days after hiring her.

"For journalism to serve democracy better, the standards of the former must not become incompatible with the essence of the latter. We need a minimum qualifying standard to apply to broadcast roles that have narrative-setting power, one that news organizations can apply to everyone hired to deliver news on air."

Subramaniam Vincent, director, journalism and media ethics, published by Forbes.

Techopedia
Big Tech Offers to Watermark AI Content — Can AI-Generated Misinformation Be Stopped?

Meta recently announced its plans to use watermark technology to distinguish AI-generated content across Facebook, Threads, and Instagram.

According to Irina Raicu, director, internet ethics, “One challenge with watermarking is that people might come to rely on it too much, not realizing that watermarks can be removed — and believing, therefore, that content that’s not watermarked is not AI-generated.”

Irina Raicu, director, internet ethics, quoted by Techopedia.

The Daily Upside
Zoom May Use AI to Tone Down Accents

Though this patent seems like a well-intentioned way of easing cross-cultural communication, there are certainly ways that this tech could go awry, said Brian P. Green, director of technology ethics. 

For starters, depending on the language, there may be bias in how well it translates one accent versus another, said Green. For example, the company’s tech may work far better with American English than with other accents or languages due to data availability.

But even if this tool is trained with ethical data privacy practices and is bias-free, it could still reinforce “value judgments” for certain accents, said Green. 

Brian Green, director, technology ethics, quoted by The Daily Upside.

Jewish Telegraphic Agency
Online Video Game Slither.io Is Letting Antisemitism Spoil the Fun

Slither.io is a multiplayer video game that has been downloaded more than 500 million times to date and contains player-created usernames that attack people of color and other minorities.

“Allowing [antisemitic] behavior is hurtful not only to the players in the game who may feel targeted by slurs, but also to anyone who hears about this behavior,” said Brian Green, director of technology ethics. “Even those using this language are ultimately harmed, as they turn themselves into worse people, who become callous to the concerns of others (this applies to the game designers as well).”

Brian Green, director, technology ethics, quoted by the Jewish Telegraphic Agency.
