In the current discourse surrounding AI, the most unhelpful slogan being thrown around goes something like this:
“AI is just a tool. Tools aren’t good or bad, they’re morally neutral. It just matters how you use it.”
This is a half-truth, which is almost worse than a whole lie. Technology does have a sort of neutrality about it, though it is perhaps better thought of as ambiguity. A hammer can be used for good by smashing your smartphone, for instance. It could also be used for evil, such as when some lunatic attempted to use one to smash the Pietà. Good or bad, in this instance, largely depends on the person wielding it.
Still, this seeming neutrality disguises the fact that every technology carries with it its own ontology, its own way of being. While it is true that you maintain a free choice about how to use the hammer, its design predisposes you towards a certain action — that of hitting. Your use of a hammer changes the way in which you interact with the world, hence the old saying “if all you have is a hammer, every problem looks like a nail.”
The hammer is a very benign technology, but there is no reason to suppose that every technology is. In our own daily experience we can see that certain technologies carry with them negative ontologies: they conform us to a worse way of being in the world. Moreover, there are other technologies that are designed for evil purposes. We shouldn’t be fooled into accepting these things simply because they come to us with the shiny label of “technology.”
Take the sword as an example. There are multiple instances where scripture speaks of the sword as a shorthand for violence. Isaiah tells us that in the day of the Lord, the nations
shall turn their swords into ploughshares, and their spears into sickles: nation shall not lift up sword against nation, neither shall they be exercised any more to war.
In a similar vein, Jesus tells Peter
Put up again thy sword into its place: for all that take the sword shall perish with the sword.
The techno-optimist might look at these passages and say something like “There’s nothing wrong with swords themselves, though! Swords are just sharpened pieces of iron, totally neutral in themselves. We should instead condemn those who misuse their swords for evil purposes.”
This misses the point entirely. Sure, when we think about the sword in terms of its components it is “neutral.” Swords themselves, though, are not. The sword is made with a particular purpose: shedding blood. Even when Peter drew his sword for the most noble cause in the world, in defense of Jesus Christ, he was still rebuked for it. Even when it is taken up for the cause of justice, the sword’s pattern of being is just as deadly to its bearer as to its target.
The person who takes up the sword is bound to “perish with the sword” not because they’ll eventually come to a fight that they can’t win, but because they’ve taken up a destructive way of being. Thus, when Isaiah speaks of turning the sword into a plowshare, I don’t think this is (merely) a case of metaphorical musing about a coming age of peace. If we really want peace, we must reject technologies that orient us away from peace. The sword itself must be destroyed and transformed into something new.
Closer to our experience in the 21st century, there are myriad newfangled technologies which carry with them negative ways of being. Televisions reorient our homes away from communion with our families and towards communion with the screen. The television replaces the hearth as a focal point, reorganizing our sitting rooms into places where cheap entertainment can be bought rather than places where meaningful entertainment can be made.
Cars orient us away from our homes and towards the road. Russell Kirk called them mechanical Jacobins for a reason. Our use of them dissolves the normal bonds of community which have existed from time immemorial and creates a radically new sort of society.
Smartphones orient us away from those we are present with and towards virtual reality. Their very use habituates you to existing in a sort of asocial private world where other people become an intrusion. Smartphones serve to build virtual “community” at the expense of actual community.
Nuclear bombs orient their users into a way of being in the world that is simply evil. Since there is no moral way to use a nuke as a nuke, those who use them are necessarily initiated into an evil way of being in the world. Even just having access to them radically alters the way in which leaders interact with other nations throughout the world. The list could go on.
This isn’t to say these technologies or others like them have no material benefits. Obviously they do or they would never have been so widely adopted. The point is, our use of these technologies is not purely neutral. Our use of them (as with any technology!) actively changes the way in which we interact with our families, our communities, and our world. Importantly, though, this view requires that we stop thinking of “technology” as a monolith. “Technology” is neither good nor bad, but technologies can be either.
Once we realize this, we realize that the discernment of technology falls within the realm of moral reasoning. The question now is “Is this technology good for me?” as well as “Is it good for my family and my community?” It even scales all the way up to “Is this technology good for the world?” These are ultimately moral questions and, being human, we are obligated to answer them. We are able to say “I think this technology is good and useful while that technology is bad and harmful.” That isn’t hypocrisy, it’s judgement.
We can bring those same powers of moral judgement to bear on AI today. AI probably offers benefits to its users, though it seems that most consumers don’t think so. If nothing else, I’ve heard tell that ChatGPT’s research function is remarkably useful and even fairly accurate. I’m sure there are other great uses, too, that I’m just unaware of.
The question we need to ask now is what sort of ontology does generative AI bring with it? Is it actually beneficial for human flourishing? I have yet to see anything that convinces me it is and have seen a bit that convinces me it isn’t. Admittedly, that may stem more from confirmation bias and ignorance than from reality. If we want to be able to think well about AI, we need to be able to answer that question first. That’s hard work, but it’s the price of moral freedom.
P.S. I hope for this to be my last piece on technology for a little while. It’s felt as if I’ve become a tech blogger lately and I’d rather explore some other topics. If there’s anything you’d be interested in seeing from the Rood, let us know in the comments!