
On Artificial Intelligence and the Public Good


Responding to a Request for Information from the Office of Science and Technology Policy

Shannon Vallor

Recently, the federal Office of Science and Technology Policy issued a request for public feedback on “overarching questions in [Artificial Intelligence], including AI research and the tools, technologies, and training that are needed to answer these questions.” As the request notes,

the Federal Government is working to leverage AI for public good and to aid in promoting more effective government. OSTP is in the process of co-hosting four public workshops in 2016 on topics in AI in order to spur public dialogue on these topics and to identify challenges and opportunities related to this emerging technology. These topics include the legal and governance issues for AI, AI for public good, safety and control for AI, and the social and economic implications of AI.

The Request for Information lists 10 specific topics on which the government would appreciate feedback, including “the use of AI for public good” and “the most pressing, fundamental questions in AI research, common to most or all scientific fields.”

One of the academics who answered the Request for Information is Shannon Vallor, the William J. Rewak Professor at Santa Clara University and one of the Markkula Center for Applied Ethics’ faculty scholars. Vallor is the current Chair of SCU’s Philosophy Department; she is also an executive board member of the Foundation for Responsible Robotics (responsiblerobotics.org), President of the Society for Philosophy and Technology (spt.org), and a member of the Global Initiative for Ethical Considerations in the Design of Autonomous Systems.

Here are the comments that she submitted:

As a scholar who has advised many tech leaders, policymakers, computer scientists, roboticists and software engineers about the ethical implications of emerging technologies, including AI, I find it striking and concerning that the OSTP announcement makes no mention of the importance of AI research ethics or an increased understanding among policymakers of the ethical dimensions of AI research, development, and implementation. This is a significant blind spot that must be remedied; vague references to the 'public good' and public 'benefit' [which appear in #2 and #7 on the list of topics referenced above] are insufficient to reflect this need.

Many international and interdisciplinary bodies are already forming to address this concern—for example, the IEEE Standards Association's Global Initiative for Ethical Considerations in the Design of Autonomous Systems, of which I am a member.

The ethical dimensions of AI go far beyond, and are too often occluded by, the highly speculative fears being stoked by Hawking, Musk, Gates, and others about 'superintelligent' AI overtaking humans. Most AI researchers know that the ethical issues that require our immediate attention and action are far more concrete and near-term:

1. Appropriate and effective human oversight of AI systems, especially those with a direct role in matters of human life and death or public safety (e.g. driverless cars, AI diagnosticians such as Memorial Sloan-Kettering's implementation of IBM Watson, and lethal robots).

2. Moral transparency of AI mechanisms and decision processes, especially where opaque biases in AI algorithms and training data may lead to unjust outcomes or policies in predictive policing, lending, education, housing, health care and employment, to name just a few likely sectors.

3. The large-scale effects of AI and associated automation on human labor, social security and stability, and economic inequality.

4. The effects of AI systems and increasing automation of higher-order tasks on the intellectual and moral skills of human agents.

5. The moral effect of AI on human emotions, sociality, relationship bonding, public discourse, and civic character; for example, the likelihood of humans forming robust emotional attachments to AI systems that simulate human emotional responses, and the high potential for emotional/psychological manipulation and commercial exploitation of human agents by AI systems.

6. The ethical risks of 'automation bias,' in which humans have been shown to vastly overestimate and rely unduly upon the intelligence and capabilities of autonomous systems, often on the basis of very thin behavioral similarities with humans.

These are only a few of the ethical issues to which the OSTP must devote significant attention and research funding if the use of AI for 'public good' is to become a reality rather than an empty promise.

The OSTP should consider how it can more directly encourage and support the already expanding interdisciplinary efforts of AI researchers and ethicists to collaborate on responsible AI design, manufacture, and use; for example, through research grants that fund:

a) direct research on AI ethics;

b) studies seeking good models for successful working collaborations between AI researchers and ethicists;

c) effective and well-integrated educational programs on AI ethics at all levels and across disciplines; and

d) educational training on AI ethics for regulators, policymakers, and other relevant stakeholders.

The ethical dimensions of AI research will very quickly dwarf in public importance even the ethical issues long recognized as central to biomedicine, since AI systems will soon be integrated into virtually every human institution and practice (medicine being just one). The OSTP would be well served to explicitly recognize and support efforts to catch up with this growing need.

According to the “Notice of Request for Information,” responses may be submitted online at https://www.federalregister.gov/articles/2016/06/27/2016-15082/request-for-information-on-artificial-intelligence; the deadline for submissions is July 22.

Photo by Dan Mason, used without modification under a Creative Commons license.

Jul 19, 2016