This article was originally published for The Pillar on December 16, 2025.
When the topic of AI comes up in conversation, I tell people that we should kill it before it kills us. I am only half joking. Some days it’s less than half.
Now, I have read and listened to some of the “doomers” who believe that AI will bring the end of the human race. They say this would likely occur after the development of artificial superintelligence makes AI “conscious” and it decides that it should kill off the human race in order to have a monopoly over energy resources. But that’s a discussion for another time.
I understand that AI is here to stay and will continue its quick infiltration into more and more aspects of our lives. I know that there are many ways that this technology can be useful and I would be foolish to never use it.
So this summer I finally chose to voluntarily dip my toe into the world of AI.
I started to use ChatGPT to help me do research for my book and for a trip. I quickly had a number of complaints with the experience.
First, I despised the obsequiousness. Its response to every query started with some version of “you are so brilliant to ask that!” I hate when people suck up to me in such a transparent manner because it makes me not believe a word they say. Plus, I think they are probably laughing at me behind my back.
Second, I was perturbed by the tone of absolute confidence in the correctness of its response. Instead, it should respond: “after scouring the internet at a speed you cannot comprehend, the sources found suggest this might be the answer.”
This was especially troubling when I was given information that I knew was wrong.
Third, it gave me fabricated quotes from written works. Numerous times it showed me a quote attributed to a particular Federalist Paper, only for me to look it up and find it was not there. It was a good thing that early on I got into the practice of always checking sources.
While these features disappointed me, I continued to use it – with caution. (I was aided by a friend who created a way for me to get responses that checked the chatbot’s “attitude.”)
But there was one aspect of this tool that was much more troubling. It was very purposefully anthropomorphized. The responses were composed in a way that tried to make me think I was writing back and forth with a real person. For anyone who has used an AI chatbot this is not a revelation. I had read that this is how these tools work, but it was not until I experienced it that I became more frightened of the possible impact.
On top of that, I know that text chatbots pose a much smaller potential for harm than more sophisticated AI which mimics human voices and thought patterns.
This is attractive to many users and may have real potential value. But it disturbs me. That’s because I came to realize that, along with all of the other potential grave problems AI could unleash, the greatest could be the most inconspicuous – a loss of understanding what it means to be human.
I admit that I had never spent much time deeply contemplating this. I was trained to be an engineer, not a philosopher or theologian. And while the Church told me I was soul, mind, AND body, it never intuitively made sense to me. I place a high value on the life of the mind; that’s why I am an academic. I was inclined to see my body as just a useful instrument during my time on earth. But in order to avoid the heresy of Gnosticism, I accepted the Church’s teaching and tried to make sense of it.
But AI has made Gnosticism more than a philosophical or theological debate. This was made clearer to me a few years ago when a young relative said that he was going to download his brain to a computer so that he would live forever. My reaction came from the gut: that’s not living. It was the negation of the body and soul as essential parts of a human being.
Then I started reading more about the possibility of artificial superintelligence gaining “consciousness,” as some call it.
It was then that I realized that it was not my background as an engineer, a social scientist, or a politician that would be most important as I contemplated how to address AI. Certainly, there are important technological, social, and policy questions that need to be answered. None of these, however, is as important as determining what to do about AI’s challenge to the proper understanding of what it means to be human. But does anyone involved in the development and deployment of AI have an incentive to do this?
I had these concerns in mind as I recently participated in a symposium on AI. Tech companies and other researchers presented findings from polls testing public attitudes toward different uses of AI. Some of the data presented involved people’s views regarding “relationships” between humans and AI, including “romantic” ones.
The results, and the questions themselves, only added to my distress.
But then something surprising happened. I struck up a conversation with the person sitting next to me, who eventually revealed that he is a Christian who had some similar thoughts about the perils of AI. Someone overheard the two of us talking at lunch and said that he, too, was a Christian and had written on the subject of God and AI. Later on, another person strongly concurred with my suggestion that we will need a renewal of Christian faith to successfully deal with challenges to democracy and with AI.
At the end of the event, as participants were giving closing thoughts, the last person who spoke suggested that the next event should take a “philosophical” approach to AI.
I had assumed that I would come out of this symposium more concerned than ever about the impact of AI. In some ways I did. But I also found that there are Christians out there grappling with how to deal with AI who know we must begin with first things.
It is going to be a difficult road. We will find ourselves at times fighting against the private and public economic incentives of AI, as well as – potentially – national security interests. The Catholic Church has a special role to play in leading this, but I do not expect business leaders or policymakers to be directly swayed by the voices of priests, bishops, or even the pope.
That doesn’t mean that they should not try. But it is more likely that lay Catholics – aided by learned clerics – can have an impact by joining with other Christians who understand the truth and what is potentially at stake with the rise of AI.
Without a proper understanding of what it means to be human, AI will only lead us more quickly down a dark path. I do have some hope of avoiding this fate. But there are moments I just want to be a Luddite and smash the machines.