
Ethics Center Collaborates with Global Universities on ‘New Humanism’

A silhouette of a human head with a question mark.

The Markkula Center for Applied Ethics is participating in an effort to better understand how neuroscience and artificial intelligence (AI) will affect what it means to be human. New Humanism at the Time of Neuroscience and AI (NHNAI) is a project led by the Catholic University of Lyon, France, as part of a ten-university consortium.

NHNAI has four main components:

  1. Develop a list of the most pressing questions to discuss with various stakeholder communities and, from these discussions, gather academic knowledge;
  2. Share the list of pressing questions for contributions and insights during further discussions with stakeholder communities through face-to-face workshops and online debates;
  3. Collect results and share them publicly, through recommendations and whitepapers, with the intention of informing ethics and policy-making efforts;
  4. Determine ways to continue this capacity-building mission in the future.

The project is looking at how humanism (perhaps better thought of as the humanities and what it means to be human), neuroscience, and artificial intelligence directly affect education, democracy, and health care.

The Markkula Center for Applied Ethics is a key part of this effort, with Director of Technology Ethics Brian Green acting as the thematic coordinator for AI ethics and Thor Wasbotten, the Center’s managing director, serving as project coordinator. To further all of the above goals, the Markkula Center has started a series of conversations with stakeholders to discuss some of the pressing questions and to begin to find ways to benefit all of humanity.

By distilling the wisdom from these discussions, we hope to learn more about social sentiment on these issues and how some of these concerns might be addressed. This project is not only about “products” and “solutions,” however; it is also about the people involved. As a project meant to ask what it means to be human, it must engage with humanity itself, in the form of the actual people taking part. Even more than any “products” that might come from this investigation, the development of talent and skill among the practitioners and conversation partners in the NHNAI project will be a long-term gain for those thinking about humanity – and living it – in this new technological age.

This four-year project began in 2022 and will run through 2025.

Here are some of the key takeaways from our initial discussions on education, democracy, and health care:

EDUCATION

  • Participants discussed whether education should aim to produce individuals who can contribute to the economy by providing basic literacy, numeracy, and critical thinking skills; whether its value lies merely in obtaining a degree; or whether it should prepare people to become lifelong learners. AI may change the way certain skills are taught, but education's primary purpose remains to prepare individuals for lifelong learning and societal engagement.
  • Participants also examined the education system's flaws, such as the sense that education has become transactional. The integration of AI exposes cracks in the system, including AI's impact on the scoring and grading that shape access to jobs, credit, cars, and housing. The discussion highlighted the need to focus on process and on teaching students how to produce things, not just on outcomes.
  • AI should enhance the learning experience, not just complete tasks or assignments in a way that short-circuits the learning process. AI will also challenge educators to rethink their teaching methods and to anticipate the impact of AI technology on future careers.

DEMOCRACY

  • Participants considered AI's impact on election communications, including the personalization of messages to persuade individuals to vote for a specific candidate, as well as the potential of generative AI to create propaganda and misinformation.
  • Advanced generative AI may limit democracy if tech companies federate their solutions, thereby restricting citizens' access to diverse information.
  • Might AI enhance democracy by shaping and changing governance processes, institutions, actors, and outcomes, and what impact would it have on different aspects of democratic governance?

HEALTH CARE

  • Human connection should not be overlooked in health care; AI-powered algorithms could raise ethical concerns and produce inaccurate diagnoses and treatment plans.
  • Care for patients cannot be fully automated, and humans still owe something to the patient even when no further medical interventions are viable.
  • Participants discussed the concept of adjustable autonomy in AI systems, in which a system recognizes when its model is insufficient and hands the task over to a human.

New Humanism at the time of Neurosciences and Artificial Intelligence (NHNAI) Logo

Apr 27, 2023
