Robot-Proof: Higher Education in the Age of Artificial Intelligence, Revised & Updated Edition

Aoun, Joseph E. Robot-Proof: Higher Education in the Age of Artificial Intelligence, Revised & Updated Edition. Cambridge, MA: MIT Press, 2024. Pp. 224. $24.95.

Joseph Aoun, president of Northeastern University, believes that colleges and universities need to offer a “robot-proof” education. By this he means that higher education should provide students the kind of training that will help them avoid being replaced by artificially intelligent robots in the workplace. In Robot-Proof: Higher Education in the Age of Artificial Intelligence, Aoun tells readers how “universities will have to adapt” (40). But he does not tell us why.

This is not terribly surprising. Aoun’s view of higher ed’s telos is almost entirely utilitarian (a fact reinforced by his preference for the barbaric “utilize” over the more humane “use”). For Aoun, higher education is for workforce training (45) and upward economic mobility (8). Indeed, it seems that from Aoun’s perspective, human history itself is little more than ever-increasing economic productivity (23). Thus, he thinks we need a form of higher education to “empower us to utilize the AI-driven world to its fullest potential” (75).

To do this, Aoun proposes a model he calls “humanics” (74). C. S. Lewis once quipped that barbarous ideas ought to have barbarous names. While it would be unfair to characterize Aoun’s model as barbaric, the barbarous epithet is telling. (Perhaps “humaniX” would have been too clever by half.) It suggests a fusion of robotics and humans according to which the future is about little more than humans collaborating with robots. So, higher education needs to be in the business of teaching students how to work with increasingly sophisticated machines.

To be fair, Aoun believes that the humanics model needs to lean into the “uniquely human attributes” (67). So, for example, Aoun believes that AI lacks “contextual understanding” (94, 106). Thus, since computers cannot decide what to do with information, humans should do this (58). (Though again, it is not clear why. Perhaps it is because computers lack a right-brain hemisphere.)

Among other things, the humanics model would ostensibly emphasize “new” forms of literacy: technological, data, and human (75). Technological literacy involves teaching students how technologies work. Data literacy is about information sorting (true/false; useful/irrelevant). Human literacy is about thinking, communicating, collaborating, and remaining flexible and creative in the face of change.

For those who have spent any time teaching in higher education, Aoun’s proposals are hardly groundbreaking. Once readers get past the surface of Edu-speak jargon (e.g., “project-based learning,” “experiential learning,” “cooperative learning,” “cultural agility,” “lifelong learning,” etc.), it becomes clear that most of what Aoun is proposing is little more than repackaging ideas that have been floating in the ether of higher education for the last 30 years or more. (Let us ignore the irony that this is a task for which ChatGPT is eminently qualified.)

Aoun writes, “education is meant to confer sound judgment” (123). So, I will close by offering my own. In the age of AI, Robot-Proof is not. Ironically, the word “robot” comes from a Czech word (robota) meaning forced labor, servitude, or drudgery. Once upon a time, a “robot-proof” education was simply called a liberal one: i.e., an education for freedom. But the robot-proof education that Aoun is proposing is one whose end seeks merely to outflank robots in the drudgery of economic servitude.

Contrary to what Aoun assumes, the most pressing question of our moment is most assuredly not: how do we train workers for modes of productivity that will not be outpaced by machines? If this is our deepest conundrum, then despite what Aoun thinks, upgrading our minds (95, 127) is a losing battle. If production is the point of human existence, then humans are destined for obsolescence.

In the age of AI, the most pressing question is simple. Do human beings matter at all? And if so, why? In ancient Babylonian mythology, humans mattered to the gods because they provided a cost-effective labor force – until they did not. In the twenty-first century, humans no longer represent a cost-effective labor force to the gods of Silicon Valley. Perhaps higher education needs to be rooted in a different mythos from the ancient Near East, one in which human flourishing, and the education that sustains it, is not measured in terms of marginal economic utility. But that would require higher education’s priestly class to serve a different god.


Justin D. Barnard

Professor of Philosophy, Honors Community | Union University
