The Future of Personhood and the Ethics of Artificial Intelligence

Artificial Intelligence (AI) has captured the attention of a wide variety of individuals and groups: junior programmers seeking help with coding, university administrators concerned about academic dishonesty, and healthcare professionals seeking tools for analyzing patient data. Even Pope Leo XIV has raised concerns about AI’s implications.

Responding to this cultural moment, the Institute for Human Ecology and the Thomistic Institute chose AI as the theme for the 2025 Civitas Dei conference. Drawing on such thinkers as Saint Augustine and Saint Thomas Aquinas, the conference speakers brought clarity to many aspects of the conversation surrounding AI. Two aspects have stuck with me as worthy of reflection: first, the need for greater anthropological reflection in general; second, the promise of Karol Wojtyła’s anthropology in particular.

First, Father James Brent, OP, suggested that conversations about the ethics of AI will be enriched if they begin by establishing a robust anthropology of the person. His point is quite simple but necessary at this cultural moment: We cannot talk about what is good or bad for a person (ethics) until we have a clear idea about what a human person is (anthropology). However, arriving at a clear idea of the human person is difficult, especially in a time when many misleading anthropologies are on offer. For instance, prevailing materialist anthropologies consider humans only in light of measurable metrics, such as efficiency, economic impact, and the like. But the human being is more than matter, so we must think about humans beyond mere metrics. We need an anthropology that captures the whole realm of human experience: one that includes but extends beyond efficiency and accounts for the importance of personal engagement in certain tasks, like a father writing a speech for his daughter’s wedding, or a teacher meeting personally with students rather than sending out AI-generated responses to their email inquiries.

Second, Father Brent responded to the need for an adequate anthropology by appealing to Karol Wojtyła. Building on Aquinas and drawing out the subjectivity of the person in light of the phenomenological tradition, Wojtyła developed an anthropology that frames personhood under six properties: self-possession, self-governance, self-determination, transcendence, integration, and participation. Growing in these six properties means growing as a person; diminishing in these six properties means diminishing as a person. Equipped with an understanding of these six properties, one can see AI’s effect on the person much more clearly.

For example, let’s use Wojtyła’s anthropology to consider one effect of AI: “de-skilling.” When a student uses ChatGPT to write his Intro to Philosophy paper, what happens to his ability to write? When AI drives my car, what happens to my ability to drive? When judges make choices based solely on AI, what happens to their ability to deliberate? In each of these instances, a person slowly loses a skill. But what is significant about losing those skills? Appealing to Wojtyła’s anthropology is helpful in these examples because it identifies why losing a certain skill can be problematic: It involves a loss of self-governance. When I offload my writing to ChatGPT, my ability to write well (and think creatively) atrophies, and I become ever more dependent on ChatGPT. In Wojtyła’s vocabulary, I am diminishing in my self-governance. What is at stake here, ultimately, is not my ability to write but my flourishing as a person.

But there is a reason why people give up their self-governance: AI can do laborious tasks in a fraction of the time it would take a human agent. What if it is worth giving up self-governance for a greater human good? Consider using AI to find new drug combinations to treat rare diseases. In that case, the AI does the discovering, not the researchers. Is surrendering self-governance worth it in that situation?

These two ideas from Father Brent — the need for anthropology to ground ethics and the promise of Wojtyła’s anthropology — are helpful additions to conversations about AI. But they leave us with an important task — applying these (and other relevant) principles to the complexities of our concrete circumstances. This is a task both promising and perilous, as the brief examples above suggest.

But perhaps more important than developing a robust moral theory, we need moral exemplars. Even with the best philosophical categories of the person, we will be missing something important if we do not have an example to follow. We need Wojtyła’s six categories of the person, but we also need people to show us what it looks like to live out those six categories in the concrete circumstances of our lives, which now include many and various uses of AI.    


