Tool or Tyrant: Will AI Enhance or Erode Human Autonomy?
Remarks given by Cosmos Chair Brendan McCord at the "Lyceum Project," a collaboration between Stanford and Oxford hosted at Aristotle's old school
Thank you to Professors Ober at Stanford and Tasioulas at Oxford for gathering us here today, and to Prime Minister Mitsotakis for his participation. The spirit of the Lyceum Project and the sublime sobriety of Aristotle are essential and all too rare in today’s world.
Will AI weaken human autonomy through the influence of disempowering, centralizing forces?
I will argue that the soundest approach to AI philosophy blends a broad Aristotelian ethical strategy–one that defines and defends (normatively) a classically rooted conception of human vitality–with a renewed defense of liberal freedoms and institutions, especially human autonomy. By “human autonomy” I mean the state of being free in one’s mind and in one’s actions—the advancement of inner freedom on the one hand and the protection of an individual’s independence from external coercion on the other.
Why does liberal freedom need an Aristotelian corrective? Most pressingly, to prevent it from sliding into a centralizing ethos that would reduce human beings to enervated, disempowered agents whose lives are characterized by passive existence under a tutelary state or the passive satisfaction of algorithmically seeded preferences.
And why does Aristotle need a liberal corrective? For all their wisdom, both Aristotle and our hosts mischaracterize a market order that brings about tremendous equipment and support for flourishing. One that liberates individuals from coercion, privation, drudgery, sickness, and death. That enables modern medicine, near unlimited access to creative works, and tremendous growth in leisure. Leisure of the kind that can be used for many things, but plausibly for politics or contemplation as well.
While Aristotle believed a virtuous order required a hierarchy of purpose embodied in and enforced by law or regulation, this account is incomplete: it cannot tell the full story of how these marvels of the modern age, this equipment for flourishing, have been achieved.
We must preserve and extend these endowments of liberalism, while returning to Aristotle to help liberalism frame a conception of human vitality and autonomy that resists pessimism or passivity in the face of technology.
And this is my main fear for AI and AI governance: Hard despotism was the greatest evil of the 20th century. Will soft despotism–a weakening of human autonomy through the influence of disempowering, centralizing forces–be the defining threat of the 21st?
AI as an intelligent tool
Professors Ober and Tasioulas call AI an “intelligent tool,” and Aristotle would agree. Aristotle tells us that while tools cannot grant us a good human life, a good human life is tool-dependent, and therein lies a risk. Let me explain.
The Nicomachean Ethics distinguishes two categories of goods: goods we pursue for their own sake, and goods we pursue for their utility, often as means to goods of the first category.
We humans make these distinctions and form our means-end hierarchy spontaneously, somewhat unthinkingly. And we do so first from ordinary experience and common opinion. Similarly, Aristotelian vitalism need not depend on metaphysical notions such as “natural” teleology, but can rest on immanent, ordinary experience and its forms of “conditional necessity.”
For Aristotle, clarifying the right “means-end” relations is extremely useful to our pursuit of happiness. We naturally orient ourselves by goods we presume are choice-worthy for their own sake. So, what are those?
Intrinsic goods, the “nodes” of happiness, are activities and experiences satisfying in their own right: those that exercise the social, rational, poetic, and bodily energies that make us human beings. These are what we need most for a flourishing life.
But unfortunately, AI can’t simply grant these things to us. No technological overseer or technological substitute ever can. AI can be a tool for our empowerment, but ultimately, it’s up to us to strengthen and energetically exercise our individual capacities. We need to elevate and defend those activities that are intrinsically good (and the substantive and generous sphere of liberty that undergirds them).
But tools are also essential for attaining the lowest goods, like safety, food, and shelter. The material and structural mechanisms we use to supply these basic needs, and thus to enable our higher pursuits, are also tools.
Aristotle thus concedes that a good human life is both aided by tools and dependent on tools. And this tool dependence carries certain intrinsic risks and vulnerabilities.
The risks and vulnerabilities of tool dependence
For a variety of reasons, both Aristotle and Plato were wary of extreme tech optimism (see Aristotle’s critique of Hippodamus in the Politics, for example), not least because, as I will explain soon, it is easy for humans to foist their hopes onto techne and thereby distort the instrumental character of their tools.
But to say that tools are instruments is not to say that their value is neutral and user dependent; far from it. Again, using tools creates dependence, and dependence creates vulnerability. As Plato shows in the Phaedrus, even the use of writing entails inevitable tradeoffs, like a weakened memory and superficial understanding. The automated toilet flusher does as well, insofar as it removes the tiniest act of virtue.
This is part of why both Plato and Aristotle thought that innovation should be balanced with tradition, especially those traditions that cities or civilizations organically develop to manage the trade-offs in means-end relations that tool dependence entails. In recognizing the value of traditions oriented toward human flourishing, Plato and Aristotle implicitly acknowledge the importance of a spontaneous order that supports moral and intellectual excellence, which cannot be achieved through top-down legislative fiat or centralized control.
Liberalism, unlike ancient thought, began its life as a pro-science, pro-tech political philosophy, absent certain protections that might have satisfied Plato and Aristotle. Enlightenment optimism, and especially its commitment to the abstract ideas of the rule of law and equal natural rights, wherein individuals can freely express what Adam Smith called our natural propensity to “truck, barter, and exchange,” led to astonishing progress in science and technology, radical generation, dispersal, and use of knowledge in society, and unprecedented benefits for the nourishment and support of the human good. To flatten markets to mere “wealth maximizers,” “preference satisfiers,” or outlets for “unjust acquisition” misses these effects entirely.
But in our current moment, when tech and tech governance threaten to undermine human autonomy and centralize power, liberalism does need something like the corrective the ancients supplied: above all, their insight that technological tools find their primary justification as equipment for flourishing.
We risk losing sight of the tool-like character of our digital instruments
In the case of AI, preserving the distinction between the ends of human flourishing and the instrumental character of technology is more fraught than ever.
Not only do we live with heightened forms of tech utopianism (see Transhumanism or Accelerationism), but even those who worry about technology’s risks imbue tech with its own agency or “will.” And those who insist on the tool-like character of technology, though they are right about its instrumentality, often take this to imply its neutrality, which ignores the extent to which tech is intrinsically multivalent for the user.
In the case of AI, we have fashioned a set of tools in our image–tools that “talk,” “think,” and “create” like us…soon, in some cases, much better than us. We are at high risk today of losing sight of the tool-like character of our digital instruments. Indeed, some today believe that intelligent machines have conscious minds and can suffer pain, have personalities, and deserve rights, protections, etc.
At stake is the risk of drifting into a passive and fatalistic form of technological dependence, which transforms human beings into objects of administration and control through fully automated soft despotism.
Defending and understanding the uniqueness and dignity of the human person–the characteristic excellences that define thick and intrinsic notions of autonomy–and building tools as equipment for flourishing, thus becomes a most urgent necessity of the AI era. Thank you.
To read more about AI x Aristotle, check out my debate with MIT Professor Bernhardt Trout and this writeup on self-guided machines and other insights from Aristotle’s Politics.