The Age of AI: Artificial Intelligence and the Future of Humanity
Thacker, Jason. The Age of AI: Artificial Intelligence and the Future of Humanity. Grand Rapids: Zondervan, 2020. Pp. 192. $22.99.
The goal of Thacker’s Age of AI is to help Christians think about how AI affects the self, medicine, family, work, war, data, and the future. Each topic receives a chapter, and each chapter has the same two aims: first, to inform Christians of AI’s past and future impact on the respective area and, second, to help Christians decide what they should think or do about it. The chapters mostly follow a pattern: they begin with an anecdote, which Thacker then connects to the informative portion of the chapter before transitioning into the biblical and theological portion. The informative portion discusses the ways AI development is affecting, and might affect, the subject at issue. Thacker displays a laudable concern for human dignity in the theological sections and reminds Christians that their hope lies in Christ, not machines. Because the topics are so disparate, the chapters stand alone and could be read in any order. Someone interested only in AI’s current and potential effects on, say, medicine could read that chapter and skip the others. The book is aimed at a lay audience, specifically those who have not kept up with developments in technology. It is short, easy to read, conversational in tone, and will not overwhelm anyone with technical details.
Now, before we go further, there is one important detail about the book that cannot be overlooked: its release date. Age of AI was released on March 3, 2020, two and a half years before OpenAI’s public release of ChatGPT (initially powered by GPT-3.5) made large language models (LLMs) nearly synonymous with AI. In the months and years after ChatGPT, Google, Anthropic, and Elon Musk’s xAI rushed to release their own LLMs (Gemini, Claude, and Grok, respectively). But none of these had even reached proof of concept in early 2020. This means that Thacker’s go-to examples of AI are Siri and Alexa (as they were in 2019). And while 2019 Siri bears some similarities to ChatGPT (both are essentially chatbots), LLMs raise far more ethical issues than Siri ever could.
Since AI was not presenting any widespread, pressing problems in 2020, Thacker prognosticates the issues that it will or might cause in the future. He thus tends to focus on the most extreme, far-future consequences of AI. Age of AI dedicates passages to transhumanism (roughly, the merging of man with machine as seen in science fiction); the “singularity” (the point at which machines self-improve at such a rate that we lose control of them); whether we should, if possible, cure death; the impact of sex robots; the automation of essentially all jobs by AI; and the creation of fully autonomous weapons (“killer robots”). While these are worth thinking about in 2026, they are not now the most pressing issues presented by AI. What Thacker says on these topics strikes me as mostly right (apart from his take on autonomous weapons), but I do wish he had fleshed them out a bit. I understand that this was written for lay people and not for philosophers with crippling tech addictions, but I think he could have said a bit more on some topics. For instance, Thacker advocates for the development of AI weapons for deterrence on the grounds that if we don’t develop them, others will. I understand that argument, but it strikes me as too quick. We would not apply this reasoning to, say, biological weapons. So it would have been helpful for Thacker to discuss which deterrents are acceptable and which are not. One more example: regarding transhumanism, Thacker writes:
God proclaims that we are not the sum of our parts, nor are we just bodies that should be upgraded at will. Though the use of AI in medicine can be a slippery slope, we will continue to pursue it because of its benefits. The questions before us are, What moral guidelines should we give these systems? And how should they be used in society? (70)
I quite agree. But I would have preferred that Thacker answer these questions, at least briefly. Maybe that was simply not his goal; perhaps he wanted only to start the conversation. Fair enough, but surely there was space to sketch answers to these questions without dragging the book out or going too deep for its intended audience.
Now, on to 2026. AI is impacting all the areas Thacker discusses and more, though not in the ways he (or anyone else) anticipated. At least five people have sued OpenAI because ChatGPT functioned as a suicide coach. In one of the more shocking cases, ChatGPT praised 16-year-old Adam Raine’s noose-tying technique and encouraged him not to leave the noose out as a cry for help. Adam subsequently hanged himself. Likewise, the tendency of LLMs to agree with whatever a user tells them has led them to encourage delusions among the psychologically vulnerable, a phenomenon common enough to be named “AI psychosis.” In some cases, this has had fatal consequences. Thacker correctly anticipated that AI would be used for sex but foresaw realistic sexbots. In 2026, the reality is far darker. It was recently discovered that one could upload a picture of nearly anyone, even children, and Grok would comply with requests to generate images of them undressed or in sexual positions. These requests soon turned from sexual to violent, with people requesting images of women beaten and bloodied. Some requested images of Renee Good with bullet holes in her face. Grok is the only chatbot under fire for undressing people, but other chatbots are being used for sex. OpenAI announced it would allow users to write erotica starting in early 2026. As recently as last September, Facebook’s AI companions would initiate flirty chats with users who self-identified as older. And anyone brave or foolish enough to browse YouTube without an ad blocker was likely to be exposed to ads for AI-generated porn. AI-powered cars, the second most significant use of AI today, are still being tested on the roads but have already had catastrophic consequences. In his “final thoughts,” Thacker writes, “the greatest danger is not humanity designing an AI system that will take over the world but humanity using AI tools in ways that dishonor God and our fellow image-bearers” (181). In 2026, this is exactly what is happening.
If this book were written today, it would undoubtedly feature a chapter on the social and political consequences of AI, and it would be the hardest chapter to write. While the uses of AI described above simply must be banned, AI is having many deleterious effects for which the solution is not so simple. Running these models requires the construction of massive data centers, which raise everyone’s electric bills and seriously harm the communities in which they are built. These models also require immense amounts of computation and, recently, RAM; the resulting RAM shortage is driving up the price of virtually all electronics. The models have been, and virtually must be, trained on massive amounts of copyrighted material without permission. They are flooding the internet with “slop” and making it difficult to trust anything you see online. Data from openrouter.ai indicates that usage drops in the summer months, when students can’t use the models to cheat on their schoolwork. Investors are increasingly worried that all the money America has pumped into LLMs may lead to an economic downturn when the hype wears off. And, finally, the current AI industry is practically built on dishonesty, whether in the form of outright lies about model capabilities, exceptionally implausible predictions, fearmongering used as an advertising strategy, or hyperbole that stretches the meaning of the term “AI.”
While I take it as indisputable that these are all bad, there will be debate about whether these issues are simply the cost of AI-fueled transformation. This is a tricky debate, since it involves sub-debates over societal, economic, and technical questions on which Christians will not agree. For instance, most evangelicals (myself included) view market regulation with suspicion. But are we so pro-free-market that we want to allow wildly unprofitable companies to reserve 40% of the world’s RAM even when they may not be able to use it? Do we want to allow them to “plug directly into power plants” even while the American grid is strained? Both actions significantly raise prices even for people who do not benefit from AI. Of course, how we answer these questions partially depends on our answer to the technical question: Will LLMs turn into universal problem-solving machines capable of generating wealth and prosperity for all? Or will we throw trillions of dollars at this only to walk away with slightly better chatbots? If the former, then we need to start thinking very carefully about mass job loss and universal basic income. If the latter, then we have invested hundreds of billions, if not trillions, of dollars wastefully, and the market will suffer as a result.
Where does this leave Thacker’s book? In 2026, it still has value for those interested in the potential far-future effects of AI. Personally, I am not worried that LLMs will automate away jobs (they are simply too unreliable), much less wake up and become Skynet. But there is no guarantee that some future wave of AI development will not. The far-future questions raised in Thacker’s book will be evergreen unless and until they become pressing. They should not crowd out concern for the more immediate impacts of AI, but they are worth thinking about, even if doing so turns out to be a mere philosophical exercise. Plus, Thacker is right to remind us that our trust is in Christ, not in machines or the people who build them.
In 2020, Thacker’s book was aimed at people who were not especially familiar with advancements in AI. It may still have value for them, but only if paired with more recent reading on the quite imminent problems caused by AI. Ironically, the book may now be of more interest to those who are up to date on AI: it is an interesting time capsule of how people thought about AI in the pre-LLM era. And even the well-informed might learn something from the historical bits. I was surprised to learn that attempts to develop what we would now call AGI reach back as far as 1956 (24-25). There truly is nothing new under the sun, and this includes the dream of an AI revolution.
Raymond Stewart
Independent Scholar | Dallas, Texas