
Brian Patrick Green is the director of technology ethics at the Markkula Center for Applied Ethics. Views are his own.
“Life is about choices.”
Artificial intelligence (AI) offers us the promise of better decision making, whether implemented as our conversation partner, writer, advisor, data processor, agent, or even automated car.
But what do we mean by “better”?
“Faster” might be one interpretation, and certainly, various large language models (LLMs) can produce coherent text much faster than a human.
“Without supervision” might be another interpretation, as we expect self-driving cars to drive, ideally, without any human intervention.
“With superior skill” could be yet another interpretation, as we might expect from AI data processing, or from AI advice on arcane subjects for which we have no human expert to consult.
In all of these cases, we take things that we could do ourselves and delegate them to an automated system to do for us. This is nothing unusual from a historical perspective; we have long delegated responsibilities to other humans. Is AI any different?
When we delegate a job to another person, a human being still knows how to do that job. They can explain what they do and how they do it. But AI is not a human, and AI transparency and explainability cannot be taken for granted. Indeed, the whole point of some AI systems is to do jobs that humans could not otherwise do because the tasks are too huge, such as processing enormous amounts of data.
If AI does the job of a human and that human loses their skill for that task, it is called deskilling [1, 2]. If AI acquires a skill that no human has previously had, this enhances human power, but it is not deskilling because no human loses out. A different effect immediately appears instead: dependency. People need the AI for that job, or else the job simply cannot be done.
In most cases this tradeoff is probably worth it: gaining whatever capacity the AI provides at the cost of dependency on the system. But imagine a world where humans are fully deskilled at tasks we could once perform and fully dependent on AI for tasks we could never perform. Such a world operates in an almost magical way for almost everyone, except the few architects who tell the AIs what to do, and even they command the AI only in their own fields. Everything else operates without human comprehension, delivering superior results via processes that remain quite mysterious.
If we want to go somewhere, an autonomous vehicle can take us there. If we want to write something, AI will write it for us. If we want to learn about something, AI will teach us.
It all seems very empowering of human freedom and choice. But it is not. It is, in fact, a delegation of these powers that makes our own decision making focus merely on ends and never on the means to achieve them. We have wants and desires, but no way to fulfill them without AI assistance. We would become utterly dependent on AI for everything except our initial wants.
As a tangent, it then becomes extremely important to want the right things, or else we will become horribly efficient at causing bad things. This itself warrants a hard look.
But more directly, this product-without-process takes away our ability to ever achieve ends on our own. It makes us irrevocably dependent upon machines, effectively enslaved to them. In Hegel’s master-slave dialectic, some interpreters (such as Alexandre Kojève [3]) note, contrary to Hegel’s own metaphysical interpretation [4], that the master’s dependency upon the slave is not only psychological but physical. The slave is enslaved to the master, but the master is also effectively enslaved to the slave, because the master can achieve nothing without the slave doing the actual work.
With AI, we are turning ourselves into enslaved masters. We choose goals, but the means escape us. We have wants but no ability to fulfill them on our own, without aid. We have tossed our freedom, independence, and agency out the window, for the sake of convenience.
And as our means are warped, we should expect our ends to be warped as well. Through surveillance and recommendations, nudges and addiction, AI can twist our desires, leaving us with nothing but engineered, instrumentalized ends and means. A free human person reduced to an economic or political tool, a unit of consumption awaiting satiation. This is no way to live.
If human dignity has anything to do with our freedom, then this future world where both ends and means are shackled, where we express wants placed in us by AI and are unfree to achieve them except through AI, is a future that threatens human dignity. Respecting human dignity requires respecting human freedom. “Voluntarily” choosing this future (and it is not truly voluntary in the sense of informed consent) is no excuse: if we see someone trying to sell themselves into slavery, perhaps not realizing what they are doing, we owe it to them, and to ourselves, to stop them.
Let’s not create a world where the only human choice is to be enslaved to AI. After all, what is human life without the freedom to choose? Technological dependency is one thing, since we do need fire, electricity, and so on, but intelligence dependency is another thing entirely. We should not allow AI to become our parent, and we its infants, unable to make our own choices, forever trapped in an immature state, while the “automated adults” of AI take care of all the grown-up work. Responsibility dictates that we force ourselves to grow up and live as adults in the world, even if we could avoid it and stay at home, cared for by AI “nannies.” Such a babied life might seem pleasant, but it is certainly not dignified.
If life is about choices and, in the name of kindness and optimization, we take all of our choices away, then we have taken away life itself. We might be physically alive, but ours would be not a dignified and humane way of life but something less. We can still choose whether or not to create this future, through the choices we make about AI today. Let us choose wisely.
References
[1] Brian Patrick Green, “Artificial Intelligence, Decision-Making, and Moral Deskilling,” Markkula Center for Applied Ethics website, March 15, 2019.
[2] Shannon Vallor, “Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character,” Philosophy & Technology 28 (2015): 107–124.
[3] Alexandre Kojève, Introduction to the Reading of Hegel, assembled by Raymond Queneau, translated by James H. Nichols, Jr., edited by Allan Bloom. New York: Basic Books, Inc. (1969).
[4] G.W.F. Hegel, Phenomenology of Spirit, translated by A. V. Miller, with analysis by J. N. Findlay. Oxford: Clarendon Press (1977), pp. 112–118.
