On the memorial of St. Thomas Aquinas, January 28, 2025, the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education issued a document approved by Pope Francis: Antiqua et Nova (Ancient and New), a “Note on the Relationship between Artificial Intelligence and Human Intelligence.” As is evident from its several quotations of prior statements by Pope Francis, this document provides further explication of a theme that the pontiff holds to be of particular importance for today’s world.
After an introduction (I) to the topic, the document presents four main sections (II–V) followed by a conclusion (VI). The first part, entitled “What is Artificial Intelligence?”, seeks to distinguish “between concepts of intelligence in AI and in human intelligence” (§6). The second section, “Intelligence in the Philosophical and Theological Tradition,” elucidates “the Christian understanding of human intelligence, providing a framework rooted in the Church’s philosophical and theological tradition” (§6). In the third part, “The Role of Ethics in Guiding the Development and Use of AI,” the dicasteries offer “guidelines to ensure that the development and use of AI uphold human dignity and promote the integral development of the human person and society” (§6). The fourth part offers comments on a series of specific questions. The conclusion then offers final points of reflection, flowing from what had been stated in the preceding sections. In all, the document comprises 117 numbered paragraphs.
Here, I will only highlight select points to give an overview of the main thrust of the document. In general, the document approaches AI as a technology that presents both potential benefits and potential dangers.
A fundamental danger regarding AI involves misunderstanding what intelligence truly is. By using the term “intelligence” with respect to both AI and human intelligence, a fallacious equation of the two uses of the term can arise (see §10). Such confusion could happen in two ways. One could falsely think that AI actually does everything that human intellects do, whereas in fact human intellects do a lot more than AI’s sole reliance on “statistical inference and other logical deduction” (§8). Conversely, but similarly, one could falsely think that human intellects only do—perhaps even in a less efficient way—what AI systems do, as if the human intellect were merely a biological computer.
The danger of such confusion is, at least in part, due to the fact that AI is becoming increasingly good at “mimicking some cognitive processes typical of human problem-solving” (§8). “This functional perspective is exemplified by the ‘Turing Test,’ which considers a machine ‘intelligent’ if a person cannot distinguish its behavior from that of a human” (§11). In truth, however, “AI’s advanced features give it sophisticated abilities to perform tasks, but not the ability to think” (§12). AI lacks “the full breadth of human experience, which includes abstraction, emotions, creativity, and the aesthetic, moral, and religious sensibilities. Nor does it encompass the full range of expressions characteristic of the human mind” (§11).
The document then goes on to summarize the concept of human intellect in the philosophical and theological tradition. This tradition understands intelligence “through the complementary concepts of ‘reason’ (ratio) and ‘intellect’ (intellectus)” (§14). On this point, St. Thomas Aquinas is cited: “‘The term intellect is inferred from the inward grasp of the truth, while the name reason is taken from the inquisitive and discursive process’” (ST 2-2.49.5 ad 3; §14). Intellection, then, involves apprehension of the reality presented to it, such as grasping the nature and meaning of things through the process of abstraction (in the technical, philosophical sense) (see §13 and §14). Reason, on the other hand, is a discursive, analytical process (see §14).
A key element of human intelligence is embodiment, which is something AI systems lack. The tradition proposes an “integral anthropology”—that is, it “views the human being as essentially embodied. In the human person, spirit and matter ‘are not two natures united, but rather their union forms a single nature.’ In other words, the soul is not merely an immaterial ‘part’ of the person contained within the body, nor is the body an outer shell housing an intangible ‘core.’ Rather, the entire human person is simultaneously both material and spiritual” (§16). At the same time, “the human person transcends the material world through the soul. . . . The intellect’s capacity for transcendence and the self-possessed freedom of the will belong to the soul, by which the human person ‘shares in the light of the divine mind’” (§17).
Humans are also relational, and human intelligence “is exercised in relationships, finding its fullest expression in dialogue, collaboration, and solidarity” (§18). Additionally, for humans, intelligence is not reducible to facticity or precise calculation. Rather, “‘the desire for truth is part of human nature itself. It is an innate property of human reason to ask why things are as they are.’ Moving beyond the limits of empirical data, human intelligence can ‘with genuine certitude attain to reality itself as knowable’” (§21). Ultimately, the “search for truth finds its highest expression in openness to realities that transcend the physical and created world. In God, all truths attain their ultimate and original meaning” (§23). For these and many other reasons enumerated in the document, it is tremendously important to recognize that AI systems do not grasp reality the way human intellects do. They are programs that mimic certain calculating powers, drawing statistical assessments from the databases to which they have access in order to imitate human responses; this is merely an imitation of some, not all, of what the human mind can do.

Furthermore, several ethical questions need to be considered in the development and use of AI. As a tool, AI can be programmed or employed in ways that serve the human and common good, or in ways that detract from the authentic and common good. The dicasteries are therefore clear in pointing out that responsibility for the ethical creation and implementation of AI systems falls on us humans, whether developers or end users, since AI systems are not themselves capable of moral judgment (see §39). AI can be used to assist human decision-making, but only humans can actually make decisions (see §43–48). Hence, the document urges that “regulatory frameworks should ensure that all legal entities remain accountable for the use of AI and all its consequences, with appropriate safeguards for transparency, privacy, and accountability” (§46). Relatedly, in the same paragraph, it warns against overreliance on AI in decision-making processes. It would be a mistake simply to do whatever the machine suggests.
Ethical questions about AI span a vast array of different areas. In the final main section, the document treats ten specific areas of inquiry on this front. Each is headed with “AI and” followed by an individual topic. The topics are society; human relationships; the economy and labor; healthcare; education; misinformation, deepfakes, and abuse; privacy and surveillance; the protection of our common home; warfare; and our relationship with God. There is too much to summarize here, but I encourage the reader to ponder the points made in those sections. There are serious, sometimes disturbing, factors and trends that need to be carefully considered. AI could help, but it could also harm many people in a variety of ways: morally, intellectually, economically, or even with respect to one’s mental and physical health.
In the conclusion, the document calls for wisdom. “The vast expanse of the world’s knowledge is now accessible in ways that would have filled past generations with awe. However, to ensure that advancements in knowledge do not become humanly or spiritually barren, one must go beyond the mere accumulation of data and strive to achieve true wisdom” (§113). It is essential that we consider “‘whether in the context of this progress man, as man, is becoming truly better, that is to say, more mature spiritually, more aware of the dignity of his humanity, more responsible, more open to others, especially the neediest and the weakest, and [more ready] to give and to aid all’” (§109). Decisions will have to be made “at all levels of society, following the principle of subsidiarity. Individual users, families, civil society, corporations, institutions, governments, and international organizations should work at their proper levels to ensure that AI is used for the good of all” (§110).
AI systems are not infallible or omniscient, and they are not inherently benevolent. They can be programmed with intentional bias in an attempt to socially engineer public opinion. In fact, “current AI programs have been known to provide biased or fabricated information, which can lead students to trust inaccurate content” (§84). Thus, wise oversight and application of such systems are essential if we, as a species, are to reap the benefits of AI while avoiding the many, sometimes grave, dangers that it also poses.