
False Dilemmas

Photo of rocks balanced on each other on a beach

Technology Ethics, Law, and Fairness in AI

Irina Raicu

Irina Raicu is the director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University.  Views are her own.

You might have noticed that 2018 has been a banner year for exposing the negative impacts of many tech products and services that had previously been presented as largely positive forces in society. The year had many of us discussing Facebook and Cambridge Analytica, terror in Myanmar, discriminatory ads, and more; Google and Dragonfly, Maven, Duplex, and data breaches; Amazon and Rekognition and the implications of Alexa in our homes; Microsoft and Salesforce’s controversial contracts with various agencies of the U.S. government; WhatsApp and its role in violence in India and in the Brazilian presidential election; Twitter and the amplification of misinformation and polarization; Instagram and bullying; YouTube and radicalization; etc. (Meanwhile, Palantir continues to do what it does, with limited media scrutiny.)

It has also been a banner year for more widespread conversations about technology ethics. Those conversations, and the subsequent work to develop ethical principles for product development and to introduce more ethics training in academic tech curricula and in the workplace, have been encouraging. The conversations, however, occasionally engender false dilemmas. One of those pits ethics against law and regulations.

Earlier in the year, various commentators expressed a concern that, as professor Ben Wagner put it in one article, “much of the debate about ethics seems increasingly focused on private companies avoiding regulation. … In this world, ‘ethics’ is the new ‘industry self-regulation.’” He went on to add,

This thinking of course does not do justice to many of the proponents of ethical approaches to technology who think long and hard about more ethical frameworks for technology development. But it is indicative of the increasingly common role of technology ethics in political debates, where ethical approaches are seen as a substitute for stricter regulatory approaches.

That last claim might have a particular geographical locus; in the U.S., at least, the growing consensus seems to be that both the implementation of tech ethics and new laws and regulations are urgently needed. (For my part, I haven’t heard any companies publicly trying to make the case that they should be spared regulation because they’ve strengthened the ethics component of their internal processes.)

Nonetheless, researchers like Kate Crawford have also questioned the renewed interest in tech ethics; in July, she tweeted, “What if 'tech ethics' is a smokescreen unless it contends with the bigger issues of concentrated power, governance models, due process, and public accountability?” (Her comment was retweeted 318 times and got more than a thousand likes.) Of course, that question is a call for a broader scope of ethical analysis, not a rejection of ethics; nonetheless, it challenges the kind of “tech ethics” work that looks in particular at the design, development, and deployment of new products (one kind of effort that we at the Ethics Center have been involved in—though not the only prism through which we look at the ethics of technology).

Calling, as Crawford does, for a broader view of tech ethics, or, as Wagner does, for “common criteria based on which the quality of [ethical] commitments made can be evaluated” is important. But ethical analysis can happen simultaneously on a variety of planes—and in parallel with the creation of new laws. The ethics of tech design and deployment is not a smokescreen or a fig leaf; it is part of a broader effort to create technology that contributes to, rather than diminishes, our well-being.

Another false dilemma has pitted efforts to address unfairness in algorithmic decision-making against efforts to address other (or broader) questions about artificial intelligence ethics. Earlier this month, an article brought into sharp focus claims that had been circulating for a while: it was titled “The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence.” In it, Julia Powles and Helen Nissenbaum argue that

Serious thinkers in academia and business have swarmed to the A.I. bias problem, eager to tweak and improve the data and algorithms that drive artificial intelligence. They’ve latched onto fairness as the objective, obsessing over competing constructs of the term that can be rendered in measurable, mathematical form. If the hunt for a science of computational fairness was restricted to engineers, it would be one thing. But given our contemporary exaltation and deference to technologists, it has limited the entire imagination of ethics, law, and the media as well.
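
The “competing constructs” mentioned in that passage are real: fairness can be formalized in several reasonable but mutually inconsistent ways. As an illustration only, here is a minimal Python sketch (with made-up predictions, labels, and variable names, hypothetical throughout) that computes two common criteria, demographic parity and equal opportunity, on the same data; one is violated while the other is satisfied. Well-known impossibility results (e.g., Kleinberg et al., 2016) show that, outside of special cases, several such criteria cannot all hold at once.

    # Two competing fairness metrics, computed on made-up data for two
    # groups. Everything here is hypothetical; this is an illustration,
    # not anyone's production fairness audit.

    def demographic_parity_gap(preds_a, preds_b):
        """Difference in positive-decision rates between groups A and B."""
        return sum(preds_a) / len(preds_a) - sum(preds_b) / len(preds_b)

    def true_positive_rate(preds, labels):
        """Share of actual positives that receive a positive decision."""
        positives = [p for p, y in zip(preds, labels) if y == 1]
        return sum(positives) / len(positives)

    def equal_opportunity_gap(preds_a, labels_a, preds_b, labels_b):
        """Difference in true-positive rates (one half of equalized odds)."""
        return (true_positive_rate(preds_a, labels_a)
                - true_positive_rate(preds_b, labels_b))

    # Hypothetical decisions (1 = approved) and true outcomes for two groups:
    preds_a, labels_a = [1, 1, 0, 0], [1, 0, 1, 0]
    preds_b, labels_b = [1, 0, 0, 0], [1, 1, 0, 0]

    print(demographic_parity_gap(preds_a, preds_b))                     # 0.25
    print(equal_opportunity_gap(preds_a, labels_a, preds_b, labels_b))  # 0.0

On this toy data, demographic parity is violated (group A is approved twice as often) while equal opportunity holds (each group’s actual positives are approved at the same rate), so choosing the “right” metric is a value judgment, not a calculation.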

That last claim, that the preoccupation with fairness has limited “the entire imagination of ethics, law, and the media,” is disproved by media coverage of various issues having to do with AI and labor, AI and invasions of privacy, or AI in the context of content moderation by social media companies. It is also disproved, for example, by a recent offer of funding from the National Science Foundation and the Partnership on AI (disclosure: the Ethics Center is a member of PAI). The “Dear Colleague Letter” from NSF offers support for

high-risk, high-reward research at the intersection of the social and technical dimensions of AI. … Proposals may expand understanding of the influences of AI on people and society or contribute technical innovations that overcome the emerging social challenges. Topics include, but are not limited to:

  • Safety, robustness, and accountability of AI systems;
  • Bias and fairness of AI systems;
  • Intelligibility, explanation, and transparency of AI inferences;
  • Privacy challenges with AI development and use;
  • Sociotechnical challenges involving ethical considerations;
  • Economic impacts of AI on society; and
  • Social consequences of AI system deployments.

Bias and fairness make up just one of the seven listed topics; such questions do not encompass the “entire imagination” of people working on issues related to AI.

In their article, Powles and Nissenbaum also describe the “nature of the AI bias debate” as “inherently compromised” and write,

… it should not surprise us that many prominent voices sounding the alarm on bias do so with blessing and support from the likes of Facebook, Microsoft, Alphabet, Amazon, and Apple. These convenient critics spotlight important questions, but they also suck attention from longer-term challenges. The endgame is always to ‘fix’ A.I. systems, never to use a different system or no system at all.

Given the current reality, in which biased algorithms have been and continue to be implemented, at scale, in a variety of important contexts (criminal justice, welfare benefits, hiring, etc.), describing efforts to minimize the harm inflicted by such tools as a “diversion” that “sucks attention from longer-term challenges” is particularly striking. It’s akin to watching houses in flames and arguing that firefighters should not be working to put them out, because we should be focusing, instead, on broader issues such as urban planning, the effects of global warming, and wildland management.

We need to do both. To work on reducing bias in algorithmic tools is not to accept that those tools should be deployed in particular circumstances. There are, in fact, people who do both: argue that such tools should not be used in some areas of life, and work to make those tools less damaging. This is important work that should be celebrated, not dismissed.

We need many people, working on many levels and through various means, to make sure that AI and other technologies are developed and deployed ethically.

In each of the debates mentioned above, there are well-intentioned, well-informed people on either side (and, of course, at various points in between). But neither debate warrants an either/or approach. In fact, in each of those contexts, we need to push back against the narratives of false dilemmas.

Photo by Viewminder, cropped, used under a Creative Commons license.

Dec 14, 2018