Artificial Ignorance

June 9, 2025

The great Jewish philosopher Hans Jonas observed a seemingly irresistible human temptation to understand our machines in the image of the human functions they replace, and then to understand the replaced human functions in the image of the artifacts that supplant them. In the eighteenth century, it was the clock that provided the overarching image of ourselves and the universe. Today, it is information technologies and so-called “artificial intelligence,” the cumulative product of several centuries of continuous scientific and technological revolution, that provide the mirror in which we understand ourselves. 

This vicious circle reveals what is, to my mind, perhaps the deepest and most subtle of the dangers posed by AI, whose promise and peril for both medicine and society at large are otherwise well documented. To illustrate the particular danger I have in mind, I want to work backward from a little thought experiment I recently performed.

For years, I’ve been thinking and writing about the nature of modern technology and the relationship between science, philosophy, and theology, which I regard not merely as an empirical or sociological relation but as an epistemic and ontological necessity that works itself out over the course of intellectual history. The argument is essentially a philosophical and theological one. So, I recently asked OpenAI to describe my understanding of the relationship between modern science, philosophy, and theology. What it gave back to me was journalistic, superficial, and distorted (though I repeat myself) in that it transformed the very nature of my thinking from a three-dimensional philosophical argument about the truth of things into an empirical description of two-dimensional matters of fact. Still, I could recognize myself in it. If a student had turned it in for an exam—and perhaps they have, for all I know—I would have given it a B, maybe even a B+ (if I were sitting in the backyard on a warm spring evening drinking a cold beer while I graded it). But when I asked OpenAI whether my account was true, it could not enter into the argument from the inside and render a philosophical judgment. All it could do was juxtapose it with other “matters of fact,” viewed from without, and say that “it depends largely on one’s worldview.” This question of truth, of what reality is and means and not just how it appears from this or that point of view or functions under these or those conditions, is an irreducibly philosophical and thus human question; indeed, it was the capacity to ask this question and to discover its answer that once defined the traditional Western conception of reason and humanity whose passing C. S. Lewis mourns in The Abolition of Man.

Consider, then, what an odd thing it is to think of AI as a form of intelligence. AI cannot apprehend the transcendent or make a principled judgment about the nature and meaning of things. It cannot think about, much less understand, such things. Not only is it unable even to pose the question of truth as more than a question of function or fact; it abolishes the question altogether. To say that truth “depends largely on one’s worldview” is to say there is no such thing as truth. Think, then, how much odder it is to ask AI—a so-called “intelligence” that does not think, understand, or know—to do our “thinking” for us. It would be like developing an app to pray on our behalf.

Let us return here to Jonas’s vicious circle. What we now mean by “artificial intelligence” did not just drop from the sky as a gift bestowed on us by the gods. It is both the reflection and the product of an “artificial” conception of intelligence that now determines what we think intelligence, thinking, and truth are—a conception, it turns out, that has a long philosophical pedigree. In the seventeenth century, Francis Bacon equated knowledge with power and truth with utility, with our success in analyzing nature into its component parts and identifying, predicting, or manipulating natural things and processes. His one-time secretary Thomas Hobbes may not have been the first, but he was arguably the most prominent of early modern thinkers to conceive of reason simply as calculation. Both conceptions of reason and truth have as their objective counterpart a conception of nature, drained of transcendent meaning and reduced to mechanical and biological functioning, in which questions of transcendent meaning—and thus rational standards for guiding our action—make no sense. John Dewey and the American pragmatists developed these conceptions further: reason is essentially experimental, truth is essentially experimental success, and nature is essentially whatever happens or can be made to happen through the ever-growing power of analytic science. And it was Karl Marx who had already provided perhaps the most succinct and arguably most famous summation of this developing view in his eleventh thesis on Feuerbach: “Philosophers have hitherto only interpreted the world in various ways; the point is to change it.” However, if the point is not to comprehend reality but to change it, then ideas need not be true or even really comprehensible; they need only be functional. They need only be effective at producing the desired result, which cannot ultimately be justified by any criterion other than the intensity with which it is desired. Indeed, on this functionalist conception of ideas, true and false in their traditional sense have no meaning, because the world to which they refer has no meaning.

Artificial intelligence is thus the perfect image of our prevailing conception of intelligence: a form of reason whose object is not truth but power. AI projects this power beyond a human scale. Many critics of AI are concerned that its computing and “decision-making” power will make human beings redundant or obsolete and that it will resist human governance, especially as big data is shared and its algorithms initiate actions between multiple, interfaced systems. There is already built into science and technology a structural discrepancy between our power to act and our capacity to think. There are numerous reasons why this is the case. Scientific reason measures truth by success, by the realization of experimental and technical possibilities whose limits can only be discovered by perpetually transgressing them. This makes science and technology essentially and interminably revolutionary, and the revolutions span generations: scientific truth is always provisional, the production and exercise of scientific knowledge and technological power are inherently social, and technology possesses its own causal agency and propagates its own effects. Our power to do thus not only exceeds our power to think; it also exceeds our control. The proliferation of means precedes the articulation of ends, so that technology becomes, as Jonas put it, goal-setting rather than goal-serving. It is often only after a certain technology has been developed—after we have acquired or ensnared ourselves in some new form of power—that we discover what it is for.

All of this means that we know how to do things to ourselves, to each other, and to our posterity that we don’t know how to think about, and we seize hold of, indeed irreversibly alter, the most profound things in reality—including the meaning of our own nature—without thinking about them. Scientific and technological culture not only contains this structural discrepancy between our power to act and our power to think; it also builds a disincentive to understanding, an inducement to thoughtlessness, into our most authoritative form of reason. A scientific and technological culture is destined to become what Augusto Del Noce calls a semi-culture: a culture in which we don’t know what we’re doing because we think and act without awareness of the fundamental premises of our own ideas.

But we have still not reached the deeper concern that I mentioned at the outset, which has less to do with the insidious idea that machines possess artificial intelligence than with the even more insidious idea that intelligence is artificial and mechanical, or digital, as the case may be. This is the other “side” of Jonas’s vicious circle. Another great Jewish philosopher, Hannah Arendt, once said that the problem with the new science of behaviorism was not that it was false but that modern conditions—and modern conditioning—were such that it might become true. The structural discrepancy between our power and our knowledge already means that we never really know what we are doing. But what happens when we don’t even know what we don’t know? What happens when we so completely conform our intelligence to the thoughtless, uncomprehending, algorithmic “intelligence” of AI, and offload onto it the last vestiges of the intelligence that once defined our humanity—our memory, our attention, and our judgment? Questions about the truth of human nature, or about truth as such, have already all but ceased to make sense to us. What happens when such questions cease even to occur to us?

Plato saw over two millennia ago that physicians skilled in the art of healing would be equally skilled in the application of poison. That the techniques of medicine serve health rather than death requires a knowledge beyond technique. Leon Kass has been warning us for decades of an impending crisis in medicine that now seems to be upon us—that without conceptions of human wholeness and human health beyond those biology and medicine can provide, the awesome technical powers of medicine could be harnessed to any end whatsoever and for any reason. Health conceived merely as homeostatic functioning, or medicine aimed merely at enhancement, is not enough to prevent this. C. S. Lewis, writing in The Abolition of Man when AI was not yet even science fiction, saw that if there were no transcendent truth of human nature, then there could ultimately be no reason for acting this way rather than that besides the felt intensity of our desire to act.

The consequences are humanly disastrous, both for society as a whole and for medicine. As the biotechnical power of medicine has increased, the humanistic and religious springs from which Western medicine originated and that once defined its essence have been forgotten or, in some cases, forcibly suppressed. Medicine has become the very exemplar of our scientific and technological rationality, the goal of our common striving, the repository of our hope for salvation, and our unquestioned authority. In saying that the triumph of artificial intelligence would be humanly and medically disastrous, I do not mean merely that it would be bad for both, as if society and medicine were two closed systems that just happened to be lying side by side. Rather, I mean that in the regime of artificial intelligence, medicine will be one of the principal means by which these socially disastrous consequences come to fruition. Very broadly, we can suggest four ways or areas in which this might be the case. Each of these deserves a much deeper consideration than I can provide here. 

First, we have said that scientific and technological reason measures the “truth” of our ideas by their function, by whether they work, and thus creates a discrepancy between our power and our knowledge. The result for a technological culture premised upon and organized around the pursuit of interminable technical progress is a semi-culture, where it is impossible to understand the meaning of our own actions. But if the Western tradition of thought is correct that intellect is ordered by nature toward being and truth, then the suppressed questions about the truth and meaning of things are (ontologically) unavoidable, which means that we always answer such questions in practice without thinking very deeply or even honestly about them. These are the conditions under which medicine can be infected by ideology—political, futurist, scientistic, or utopian—of which there are ample historical examples over the last century. (Eugenics, which has never really gone away but only changed names to protect the guilty, was the scientific consensus a century ago.) In fact, it is tempting to say that scientific and technological reason, left to its own devices, is inherently ideological in the classic sense given to us by Marx. An ideology is an essentially instrumental form of thought whose true nature and function are other than what they appear and profess themselves to be. Ideology in this sense is inherently deceptive and often most deceptive to its sincerest adherents. 

Second, in a society whose collective raison d’être is the interminable pursuit of scientific and technological progress, medicine possesses a special kind of authority. It is both the highest exemplar of our only publicly acknowledged form of “reason” and the goal and justification of our collective striving. But such authority necessarily means political authority. Leon Kass has long warned us against the medicalization of all human phenomena. Under the rubric of “public health,” we have seen an ever more perfect fusion of medicine, biotechnology, and state power that I sometimes call biotechnocracy. This is why pediatricians now routinely ask adolescent children about everything from their gender identity to whether there are guns in the home. Such questions testify to an expansive conception of medical concern, and they are destined to become more common, more invasive, and more comprehensive as medicine is fused more deeply with the surveillance capacities of big data and the power of AI. Organized medicine is increasingly one of the principal instruments through which the centerless sovereignty of biotechnocracy is diffused and its reductive vision of human nature enforced. We saw during COVID the growing clamor to “let science rule us,” and everyone on all sides of our political divide claimed science as an ally, because the authority of science is a perfect instrument for laundering and concealing irreducibly political judgments, allowing those who invoke it to exercise political power while absolving them of political responsibility. And of course, this only increases the likelihood of medicine’s ideological capture.

My third and fourth concerns are less directly social and political—though they are not without great social and political import—and more internal to the practice of medicine itself. The first of these is the atrophy of medical judgment as power and responsibility are offloaded onto the algorithmic decision-making of AI. After just one generation of digital life, we have already discovered that our capacities for speech, attention, and memory have deteriorated alarmingly. What is to prevent an analogous deterioration in medical knowledge and medical judgment once that power is entrusted to AI? And finally, as our reason is conformed to the image of AI and we are deprived of any intelligible sense of transcendent nature, what is to prevent us from regarding the subject of medicine—the human patient—merely as a complicated algorithm, a definition of human nature already advanced by Yuval Noah Harari in his bestseller Homo Deus? This does not seem like a stretch. COVID has already shown us how easy it is to regard other human beings merely as vectors of disease. To paraphrase C. S. Lewis once again, either the human being is an embodied rational spirit subject to a natural, rational, and moral law that transcends him, or he is just a complicated mechanism to be prodded, pulled apart, and worked upon for whatever reason our irrationality might fancy, in which case we just have to hope that our prodders happen to be nice people. There is no third alternative.

We have reached the point in the conversation—very near, perhaps, to the point of despair—where my students usually throw up their hands and ask, “Well, then, what are we to do?” I am afraid I do not have a very satisfying answer, especially if “satisfaction” is judged by the same criterion of “effective change” that defines the “truth” in our pragmatic rationality. The structural discrepancy between our power, our knowledge, and our control means that technological systems such as artificial intelligence tend to take on a life of their own, like an artificial organism. At the level of medicine as a biotechnocratic system, the beast is so well fed—and well capitalized—that I suspect we have already passed the point of no return. All I can propose is that we try to reverse Marx’s dictum and worry less about changing this brave new world and more about understanding and interpreting it in the hope that our artificial ignorance does not become compulsory, automatic, and invisible. Perhaps this won’t be so ineffective at the end of the day, since truth is about all that stands between us and the abolition of man. 

This paper was originally presented at the second Sister Generose Gervais Faith and Medicine Symposium in Rochester, MN.