A New Arms Race in Artificial Intelligence
In recent years, nations have poured enormous resources into artificial intelligence (AI) as if engaged in a new arms race. Leaders frame AI as a strategic technology that will confer global power on those who master it. The US National Security Commission on Artificial Intelligence warned in 2021 that America was unprepared for the coming AI era and could lose its leadership to China within a decade if it failed to act. China, for its part, has declared AI a key to national rejuvenation. President Xi Jinping has called for “self-reliance and self-strengthening” in AI development as China vies with the US for AI supremacy. At a recent Politburo session, Xi urged leveraging China’s “whole national system” to accelerate AI innovation, recognizing current gaps but demanding redoubled efforts to close them. In this competitive climate, even startups are making headlines: the Chinese startup DeepSeek drew global attention by launching a new AI model trained on less advanced computer chips yet achieving performance on par with Western rivals—all at a fraction of the cost. Such feats underscore how quickly the field is advancing worldwide.
This race is not limited to the usual tech hubs. Military strategists speak of an AI arms race, fearing AI-powered weapons could tip the balance of power. Economic planners see AI as the “foundation of the innovation economy” and a source of wealth and security. Around the globe, AI has become a geopolitical chess piece. Yet amid the clamor to lead in AI at all costs, one critical question often goes unasked: What is AI ultimately for? In the rush to outpace competitors, there is surprisingly little public reflection on the purpose of these technologies in the first place. This silent question—why are we developing AI, and in service of what—may well determine whether AI serves as a tool of domination or a means of uplifting humanity.
Purpose Over Progress
The neglect of purpose in the AI race is not a new phenomenon. Pope Benedict XVI, writing over a decade ago, cautioned that when society becomes fixated on how to develop technology and ignores the why, technology can be mistakenly treated as self-justifying or even as an ideology. He warned that technological progress detached from deeper inquiry into meaning and values risks becoming ambivalent or harmful. “Technological development can give rise to the idea that technology is self-sufficient when too much attention is given to the ‘how’ . . . and not enough to the many ‘why’ questions underlying human activity,” Benedict observed. The result, he noted, is that we may be “entrusting the entire process of development to technology alone,” drifting without moral direction. In other words, if we race ahead without asking what ultimate good our innovations serve, we risk creating powerful tools with no compass to guide their use.
Technological progress detached from deeper inquiry into meaning and values risks becoming ambivalent or harmful.
This insight speaks directly to today’s AI landscape. Much of the focus is on faster algorithms, bigger data, and beating rivals to the next breakthrough. But to what end? The Catholic social tradition emphasizes that every human endeavor, including technology, should be oriented toward the common good and the dignity of the human person. The Church teaches that progress isn’t true progress if it deepens inequality or undermines human wellbeing. As Pope Francis put it, “Technological developments that do not lead to an improvement in the quality of life of all humanity . . . can never count as true progress.” This principle throws into sharp relief the silent question in the AI arms race: Are we developing AI simply to win and control, or to genuinely better the human condition?
Even some policymakers are beginning to grapple with this. The UN High-Level Advisory Body on AI has warned that a handful of companies and countries could impose AI on the world “without [people] having a say in how it is used,” unless governance catches up. International discussions increasingly note that AI’s benefits and risks must be shared and managed in line with human rights and human values. Still, concrete answers to the question of AI’s purpose remain elusive in global debate. Bernard Lonergan, a twentieth-century Jesuit philosopher, would likely see here a need for intellectual, moral, and even spiritual conversion. Lonergan emphasized the importance of an “unrestricted desire to know” oriented by wisdom and responsibility. In practical terms, that means stepping back from the frenzy to ask fundamental questions about meaning and purpose. We must be, as Lonergan urged, not just intelligent in developing new technologies but also reasonable in judging their value and responsible in how we apply them. His transcendental principles—be attentive, be intelligent, be reasonable, be responsible, be loving—suggest that true progress in AI will require careful reflection on what we are ultimately trying to achieve for humanity.
Insights from the Vatican
While tech superpowers spar over AI dominance, an unlikely voice has stepped in to reframe the conversation: the Vatican. In late 2024, Vatican City promulgated its Guidelines on Artificial Intelligence (Linee Guida in Materia di Intelligenza Artificiale) as a kind of ethical compass for the digital age. These guidelines, now in effect, challenge both secular and faith-based institutions to consider “How can we ensure that AI serves humanity without compromising our deepest values?” It’s a direct articulation of the silent question. The Vatican’s answer is rooted in enduring principles of Catholic social teaching. The guidelines present a bold vision of technology as a tool wielded co-creatively with God, meant to preserve human dignity, protect the common good, and steward creation. They remind us that AI may enhance human capacities, but it can never substitute for uniquely human qualities like creativity, autonomy, or moral responsibility. In short, innovation must remain human-centered.
At the heart of the Vatican’s approach is the conviction that the human person must remain the protagonist of technology, not its victim or a cog in its machinery. One key principle states that technological innovation “cannot and must not ever surpass or replace the human being; on the contrary, it must be at [the human’s] service, so that technology supports and respects human dignity.” This emphasis on AI for the person echoes a core tenet of Catholic thought: Human beings are made in the image of God and endowed with inviolable worth. Any AI system, no matter how advanced, is ultimately a product of human ingenuity and remains a tool to be directed toward human ends. As the Vatican’s January 2025 doctrinal note Antiqua et Nova puts it, “Like any product of human creativity, AI can also be directed toward positive or negative ends.” It is not inherently good or evil; what matters is the intention and purpose guiding its design and use. The same document rightly urges that AI be “directed toward serving the human person and the common good.” That means prioritizing applications that promote human flourishing—in education, health care, work, and care for the vulnerable—and resisting uses that undermine human rights or dignity.
Crucially, the Vatican documents tie these ethical guidelines to concrete action. Pope Francis repeatedly called for a “sane politics” that can orient AI toward the common good and a better future. Responding to this call, the Vatican’s AI guidelines seek to balance the extraordinary opportunities of AI with respect for fundamental values that “safeguard every person.” They advocate international cooperation and social justice in AI development, stressing that the benefits of AI should be distributed equitably rather than concentrated in the hands of a few. In a world where advanced AI could exacerbate inequalities, the Church is asserting a preferential option for the poor and marginalized: AI should not become a new divide between technological haves and have-nots. This aligns with broader Catholic social teaching on solidarity and the universal destination of goods—essentially, that the fruits of creation (now including digital creation) are meant for the benefit of all people, not just the most powerful.
Are we developing AI simply to win and control, or to genuinely better the human condition?
Domination or Flourishing?
The tension over AI’s purpose can be seen as a contest between two visions. One vision, currently ascendant in great power competition, views AI in zero-sum terms—a tool to gain economic and military dominance, even if that means edging into morally gray areas. Indeed, Antiqua et Nova warns of AI’s role in warfare, cautioning that autonomous weapon systems could accelerate conflicts “beyond the scope of human oversight” with potentially catastrophic impacts on human rights. We have already seen prototypes of this in use: AI can coordinate swarms of drones, conduct cyberattacks, and generate deepfake propaganda. Without ethical restraint, AI might become an instrument of control and violence wielded by states or corporations against rivals and citizens. It is telling that in international forums, agreement on military AI principles is hard to come by; in late 2024, the United States and sixty other nations endorsed a “responsible AI in warfare” blueprint, but China pointedly did not support the nonbinding accord. Such discord highlights how the arms-race mentality can overshadow even basic commitments to use AI in ways that respect human life and international law.
Opposing this is the vision espoused by ethicists, religious leaders, and the emerging global coalition AI for Good. This view holds that AI’s highest purpose is to serve human development and authentic progress—the kind of progress measured not just in GDP or arsenal size but in human well-being. Catholic thinkers often speak of integral human development, meaning development that is whole and humane: economic growth paired with spiritual, social, and moral growth. If AI is developed with an integral vision, it could greatly advance education, cure diseases, reduce drudgery, and help protect the planet. For example, AI systems are already assisting doctors in diagnosing illnesses and helping farmers optimize crop yields. These benefits, however, will only be truly realized if guided by an ethic that puts people first. As Antiqua et Nova observes, the rapid rise of AI has prompted many to reflect anew on what it means to be human in a world with machines that can mimic certain aspects of intelligence. The answer, from a Catholic perspective, is that being human is not just about problem-solving ability or data processing; it’s about our capacity for reason, moral choice, creativity, love, and communion with others. AI must therefore remain a means for enhancing these human capacities, not an end in itself or a replacement for human responsibility. In Pope Francis’s words, we need an “ethic of freedom, responsibility, and fraternity” guiding technology so that it fosters “the full development of people in relation to others and to the whole of creation.”
This humane vision does not reject AI or its astounding technical advances. On the contrary, it recognizes AI as a gift of human genius—one that, if rightly directed, can be part of “the collaboration of man and woman with God in perfecting the visible creation.” Here we find common ground between theology and secular humanism: Both agree that technology should improve the human condition. The silent question “What is AI for?” invites us to measure our innovations against that standard. Does a given AI application respect human dignity and promote the common good? Does it uplift the vulnerable and enhance our capacity to live in solidarity and truth? If so, it likely serves authentic development. If instead it exploits, divides, or degrades, then no matter how “advanced” it is, it fails the test of true progress.