
The Vatican’s AI Ethics Trajectory

February 13, 2025

The Vatican has positioned itself as an influential moral authority in global discussions on artificial intelligence, framing the debate not merely as a technical or regulatory challenge but as a fundamental question about the future of human dignity and ethical responsibility. Over the past few years, it has actively shaped conversations on AI governance, moving beyond abstract ethical pronouncements to engaging directly with policymakers, corporations, and international institutions. This strategic engagement has been particularly evident in the transition from the Rome Call for AI Ethics (2020) to the recent Antiqua et Nova. This development signals both an evolution in the Vatican’s stance and an increasing recognition of AI as a domain requiring moral and political intervention.

The Rome Call, a document issued by the Pontifical Academy for Life in collaboration with major technology firms such as Microsoft and IBM, sought to establish a foundational ethical framework for AI, emphasizing principles of transparency and accountability. However, it was essentially a broad declaration, reflecting an initial attempt to insert Catholic social teaching into AI discussions without formulating a clear political strategy. In contrast, Antiqua et Nova represents a significant refinement of this engagement. Not only does it address AI’s potential risks and ethical dilemmas with greater specificity, but it also aligns these concerns with the Vatican’s broader socio-political agenda—especially regarding labor rights, economic justice, and human dignity in the face of automation and algorithmic decision-making.

This shift from an aspirational ethical statement to a more developed doctrinal and policy-oriented position suggests that the Vatican is actively seeking to influence international AI governance. The document’s publication comes at a time of heightened global debate over AI regulation, particularly in multilateral forums such as the G7 and the United Nations, where discussions on AI oversight are accelerating. The Vatican’s increasing participation in these arenas signals an intent to position itself as a key voice in shaping AI policies that resist both corporate technocratic dominance and state-driven surveillance models.

But how has this shift from thought to action unfolded? What are this transformation’s theological and political underpinnings, and what are the broader implications for global AI policy? By analyzing the move from theory to policy, I explore what this trajectory reveals about the Vatican’s strategic engagement with AI governance. I also assess whether this evolving stance could influence international discussions—particularly within the European Union and the United Nations—where debates over AI ethics, transparency, and accountability are becoming increasingly central to regulatory efforts. Ultimately, I argue that the Vatican’s deepening involvement in AI governance represents a broader trend: the increasing intersection of ethics, politics, and technological power in the global AI landscape.

Vatican AI Ethics in International Policy Arenas

The Vatican’s transition from a primarily moral and philosophical commentator to an active participant in policy deliberations reflects a deliberate strategic shift. Rather than confining its ethical concerns to theological discourse, the Holy See is leveraging its moral authority to influence regulatory frameworks at the highest levels of global governance.

One of the most visible arenas of this involvement is the G7, where the Vatican has advocated for an AI governance model that prioritizes human dignity. In contrast to purely market-driven approaches that emphasize innovation and economic competitiveness, the Vatican’s interventions stress the dangers of algorithmic bias, the commodification of human decision-making, and the erosion of personal autonomy. Its push for a “human-centered” AI aligns with broader calls within the G7 to establish ethical guardrails against the unchecked deployment of AI technologies, particularly in critical areas such as healthcare, finance, and employment.

Similarly, within the United Nations, the Vatican has positioned itself as a counterweight to both the instrumentalist1 visions of AI promoted by major technology companies and the securitized approaches favored by some states.2 Its participation in UN AI initiatives reflects a commitment to embedding ethical principles—such as transparency, accountability, and respect for human rights—into global AI governance frameworks. By calling for international agreements that place human dignity at the center of AI regulation, the Vatican is working to ensure that AI does not become merely a tool for economic exploitation or geopolitical power struggles.3

The Vatican’s influence is also becoming increasingly visible in the European Union’s AI policy debates, particularly in discussions surrounding the EU AI Act. While the EU has been at the forefront of regulatory efforts, its approach has often been shaped by a balance between economic imperatives and fundamental rights protections. The Vatican’s interventions have emphasized the need for ethical considerations to take precedence over purely risk-based or market-driven regulatory frameworks. In particular, it has sought to highlight the social and moral implications of AI deployment in areas such as automated decision-making, biometric surveillance, and labor displacement.4 This growing presence in global AI governance discussions indicates a broader ambition: to offer an alternative to both corporate technocratic dominance and state-driven surveillance models. The Vatican’s advocacy does not merely reflect abstract ethical concerns but a concrete vision for AI governance that resists dehumanizing trends in technological development.

A crucial theme in Antiqua et Nova is the emphasis on AI transparency, aligning with broader international concerns over the opacity of AI decision-making. As AI systems become increasingly embedded in governance, defense, and economic infrastructures, demands for greater scrutiny of their development and deployment have intensified. The Vatican’s insistence on ethical oversight and accountability reflects this global trend, reinforcing the principle that AI should serve the common good rather than operate as an unregulated force driven by market or security imperatives.

This call for transparency resonates with recent policy shifts in the United States, particularly the increasing scrutiny of institutions like the National Security Commission on Artificial Intelligence (NSCAI). Established to advise the US government on AI’s role in national security, the NSCAI has been criticized for its lack of public engagement and opaque decision-making processes. Legal challenges, such as Electronic Privacy Information Center v. NSCAI, have forced the commission to comply with the Federal Advisory Committee Act (FACA), which mandates open meetings and the public release of key policy documents. This judicial intervention highlights a growing global concern over the secrecy surrounding AI governance, a concern that the Vatican has echoed.

Thus, in advocating for AI governance grounded in ethical oversight and human dignity, the Vatican aligns itself with broader movements demanding governmental accountability in AI policymaking. The intersection of Catholic social teaching with contemporary debates on AI transparency underscores a strategic effort to shape policy discussions in ways that prioritize human rights and democratic accountability. This places the Vatican in conversation with a broader coalition of civil society organizations, ethicists, and policymakers who argue that AI should not function as a black box of unaccountable decision-making but must remain subject to robust ethical and legal scrutiny. Rather than positioning itself as a mere moral commentator, the Holy See is actively contributing to the policy discourse, advocating for AI systems that are not only effective and innovative but also just, transparent, and aligned with the fundamental dignity of the human person.

The Vatican’s Potential Influence on AI Policy

Could the Vatican’s approach shape AI policy discussions in the European Union or the United Nations? While the Holy See does not wield legislative authority in these bodies, it holds significant moral and diplomatic influence that could shape the ethical framing of AI governance. There are several pathways through which this influence could manifest.

First, the Vatican’s moral authority and soft power position it as a key voice in international AI ethics debates. Catholic social teaching has historically informed global discussions on human rights, labor, and economic justice, and its engagement in AI governance could inspire broader ethical guidelines. This is particularly relevant in regions where Catholic thought continues to shape public discourse, including parts of the EU and Latin America. 

Second, the Vatican’s stance could intersect with regulatory debates, particularly as the EU refines its Artificial Intelligence Act. The EU has positioned itself as a global leader in AI regulation, emphasizing the need for frameworks that protect fundamental rights while fostering innovation. By advocating for policies grounded in dignity and the common good, the Vatican aligns itself with regulatory efforts that seek to mitigate algorithmic bias, prevent the commodification of personal data, and ensure AI serves the well-being of individuals rather than purely economic interests.5

Finally, the Vatican’s involvement in AI ethics could serve as a bridge between competing governance models. The global AI debate is increasingly polarized, with China’s state-driven AI model emphasizing centralized control and mass surveillance, while Western approaches often prioritize market-driven innovation with limited ethical constraints. The Vatican’s perspective—rooted in the dignity of the individual—offers a mediating ethical framework that challenges both unchecked corporate power and state-driven AI authoritarianism. This position could enable the Vatican to shape AI policies in international forums such as the G7, the UN, and other multilateral organizations working on AI governance.

By explicitly engaging in AI governance discussions, the Vatican positions itself as a counterweight to both Silicon Valley technocracy and the growing use of AI for state surveillance. This evolving role accentuates its broader commitment to ensuring that technological advancements remain aligned with human values, reinforcing the principle that AI should serve humanity rather than dominate it.


1 For an understanding of the various narratives concerning technologies, like instrumentalism, see Technology Ethics: Responsible Innovation and Design Strategies (John Wiley & Sons, 2024), as well as a series of articles published by Evangelization & Culture Online, in particular “Beyond Computation: The Human Spirit in the Age of AI,” “Recovering the Common Good for Ethical AI Design,” and “Navigating AI with Lonergan’s Transcendental Precepts.”
2 Cf. Michael Lofton, who, in his discussion of Antiqua et Nova, falls into a similar instrumentalist reductionism: “AI can be directed toward positive or negative ends. And keep that in mind because, again, there’s a lot of people out there that say, ‘Like, oh, this is a demonic and evil tool.’ Look, it’s about how you use it. I could take a knife, and I could use it to cut some bread to make a sandwich, or I could use it to kill somebody. It’s not the knife that’s the problem; it’s the person, just like a gun. These are tools; you can misuse and properly use tools.” The Michael Lofton Show, “New Vatican Document on Artificial Intelligence!”, January 28, 2025, YouTube, 13:45–14:37, https://www.youtube.com/watch?v=1cr2wbmQE6c.
3 Compare this with the explicit goals of the National Security Commission on Artificial Intelligence (NSCAI); see, e.g., Whitney Webb, “Techno-Tyranny: How the US National Security State Is Using Coronavirus to Fulfill an Orwellian Vision,” The Last American Vagabond, April 20, 2020.
4 Cf. Francis, Address to the Participants in the Seminar “The Common Good in the Digital Age”, September 27, 2019, vatican.va; Laudato Si’ 18, 124–129, encyclical letter, May 24, 2015, vatican.va.
5 This point is reinforced by the fact that on January 5, 2024, Paolo Benanti, professor at the Pontifical Gregorian University, was called to join the United Nations High-level Advisory Body on Artificial Intelligence.