Recovering the Common Good for Ethical AI Design

April 4, 2024


In a previous article, we explored the vast expanse separating human consciousness from artificial intelligence through Bernard Lonergan’s theory of intentional consciousness, which showed the uniqueness and incommensurability of the human mind. However, even if we accept that AI and human consciousness are fundamentally dissimilar, we must still deal with the ethical and social issues that emerge as a consequence of designing and deploying AI technologies. For this reason, we now turn our attention to the application of these insights.

The previously explored concept of humans as beings in pursuit of meaning, driven by a dynamic, self-reflective awareness, provides a solid philosophical foundation for advancing into new territories—specifically, how this understanding can and should inform the ethical design of AI. Our previous reflections highlighted the inability of AI to replicate the full spectrum of human cognition and underscored the importance of recognizing and respecting this difference. Now, we seek to harness this recognition to recover and promote the common good within the realm of AI development, ensuring that as these systems become more integrated into the fabric of our daily lives, they do so in a manner that upholds and enhances the human dignity at the core of our faith and values.

Lonergan’s Theory and the Common Good

In his seminal work, Insight: A Study of Human Understanding (1957), Bernard Lonergan provided a comprehensive framework to understand human cognition through what he termed ‘intentional consciousness.’ At its core, this theory maps out the dynamic process by which humans come to know and understand the world around them. It is delineated into four cumulative and interconnected levels: experience, understanding, judgment, and decision.

The first level, experience, is about the direct, sensory perception of the world. It’s where consciousness encounters data or facts. For instance, one might experience the warmth of the sun or the coolness of the breeze without yet making sense of these sensations.

The second level, understanding, goes a step beyond mere sensation. Here, one actively questions and interprets their experiences, seeking to grasp the meaning or cause behind them. It’s the realm of insights, where patterns are recognized and concepts are formed, much like Archimedes’ eureka moment upon understanding the principle of buoyancy.

Moving further into the depths of cognition, the third level, judgment, is where one evaluates the truth of their understanding. This is the critical stage where one assesses evidence or reasons to affirm whether an idea accurately reflects reality.

The final level, decision, involves applying one’s judgment to direct action or belief. This stage is where the considered truths influence behavior or choices, culminating in the process of cognition with a tangible outcome or a change in one’s state of being.

Lonergan’s model emphasizes that this process is not static but a dynamic, ongoing flow of inquiry and affirmation, continuously seeking truth and understanding. This intrinsic dynamism of intentional consciousness is foundational to the human quest for knowledge and is distinct from AI’s programmable, deterministic nature.

Philosophical and Theological Perspectives on the Common Good

The common good in philosophical tradition speaks to the benefit of all community members. It encapsulates public resources and conditions that can be shared among the people, enabling them to fulfill their potential and lead meaningful lives. Philosophically, the common good has roots in the works of Plato, Aristotle, and the Stoics, later developed by Christian philosophers like Thomas Aquinas, who infused it with a theological dimension, defining it as the flourishing of a community in a manner that harmoniously integrates the good of individuals and the wider society.

From a theological standpoint, particularly within Catholic social teaching, the common good is intrinsically connected to human dignity and humans’ social nature as created in God’s image. It upholds the idea that true human fulfillment cannot be achieved in isolation but through relationships, justice, and peace within a community. It is where individual good does not compete with but is realized through the collective good; thus, it comprises “the sum total of social conditions which allow people, either as groups or as individuals, to reach their fulfillment more fully and more easily” (Gaudium et Spes 26).

Lonergan’s theory of intentional consciousness can guide AI development toward the common good. Lonergan emphasizes that true knowledge comes through self-awareness, critical reflection, and a commitment to authenticity in understanding. When applied to AI, this theory suggests that ethical AI design must go beyond technical efficiency to encompass the values of the society it serves.

For AI to contribute to the common good, it must be created by designers who are self-appropriated; that is, they are aware of the structure of intentional consciousness and its social implications. They must be aware of the values their designs embed and promote in society. Those developers must critically assess the alignment of AI functionalities with the ethical principles that uphold the common good. This requires an engagement with the deeper questions of purpose, value, and the ultimate end that AI should serve, grounding design choices in a philosophical and theological understanding of human welfare and societal flourishing.

Appropriating Lonergan’s framework explicitly into AI design means advocating for AI systems that support human dignity, foster community bonds, and facilitate conditions for the common good. It calls for a participatory approach to AI development, where stakeholders contribute to outlining the objective values and principles that AI should embody. It also demands ongoing reflection and dialogue about the role of AI in society, ensuring that it remains a tool for human progress within an ethical and communal context. By engaging with these philosophical and theological principles, AI developers can work towards creating systems that perform tasks efficiently and contribute positively to society’s moral and social fabric, thus serving the common good. 

Let’s look at how this can be done in practice. 

Experience and Ethical Data Gathering

In exploring the application of Lonergan’s theory of intentional consciousness to the realm of artificial intelligence, particularly in ethical data gathering, we turn our attention to the first level: experience. For Lonergan, experience is the ground level of consciousness, encompassing the raw sensory input and data that inform further cognitive processes. In the context of AI, this corresponds to the data collection phase, which forms the foundation upon which all subsequent AI learning and decision-making are built. This stage is critical, setting the tone for how AI systems interpret and interact with the world.

Ethical data gathering in AI, guided by the principles of Lonergan’s intentional consciousness, necessitates a holistic approach to the acquisition of data. Just as human experience is not merely passive but is accompanied by an intrinsic valuation of what is perceived, the process of data collection for AI must be executed with discernment. It requires careful consideration of the sources from which data is obtained, the methods employed to collect it, and the potential implications of its use.

This approach to AI data collection would prioritize respect for individuals’ privacy and autonomy, that is, their freedom to do what they ought to do. It acknowledges that just as personal experiences are private and subjective, data derived from individuals carries with it a responsibility toward the person behind the data. Ethical data practices entail obtaining informed consent from individuals, ensuring the anonymity of personal information, and implementing robust security measures to protect against data breaches.
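These practices, obtaining consent, protecting identity, and securing data, can be made concrete even at the level of code. The following is a minimal sketch of one such safeguard, pseudonymizing direct identifiers before records enter a data pipeline; the field names and salt are illustrative assumptions, not a prescription for any particular system.

```python
import hashlib

# Illustrative salt; in practice this would be a secret managed outside the code.
SALT = "example-salt"

def pseudonymize(record, identifier_fields=("name", "email")):
    """Replace direct identifiers with salted one-way hashes.

    This is pseudonymization, not full anonymization: it removes direct
    identifiers but does not address re-identification risk from
    quasi-identifiers such as age or location.
    """
    cleaned = dict(record)
    for field in identifier_fields:
        if field in cleaned:
            digest = hashlib.sha256((SALT + str(cleaned[field])).encode()).hexdigest()
            cleaned[field] = digest[:16]  # truncated hash serves as a stable pseudonym
    return cleaned

record = {"name": "Jane Doe", "email": "jane@example.com", "age": 41}
safe = pseudonymize(record)
```

A sketch like this illustrates the point of the paragraph above: the person behind the data remains a responsibility of the system even after their name has been removed from it.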

The implications of data sources in AI extend beyond the technical to the ethical realm. Datasets are not neutral; they reflect the biases and contexts of their collection. Therefore, an ethical approach to AI data gathering involves actively engaging with the potential biases inherent in datasets. This includes the diversity (or lack thereof) of data samples, their socio-cultural assumptions, and the historical context from which they arise. By doing so, AI systems can be designed to recognize their limitations and account for them in their learning processes. By applying the first level of intentional consciousness to AI data gathering, we imbue the process with a moral dimension that respects individuality and seeks to mitigate the risks of perpetuating bias and violating privacy.
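Engaging with the diversity (or lack thereof) of data samples can begin with something as simple as measuring representation. The sketch below, with an illustrative threshold and field name, flags groups that fall below a minimum share of a dataset; it is a crude first check, not a substitute for examining labels, intersections of attributes, or the historical context of collection.

```python
from collections import Counter

def representation_report(samples, group_key, min_share=0.1):
    """Summarize how each group is represented in a dataset and flag
    groups whose share falls below a minimum threshold."""
    counts = Counter(s[group_key] for s in samples)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "count": n,
            "share": share,
            "underrepresented": share < min_share,
        }
    return report

# Illustrative, synthetic data: group B sits just under the threshold, C well under.
data = [{"group": "A"}] * 90 + [{"group": "B"}] * 10 + [{"group": "C"}] * 5
report = representation_report(data, "group", min_share=0.1)
```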

Understanding and Algorithmic Fairness

The second level of Lonergan’s intentional consciousness, understanding, involves actively seeking insights beyond the mere data. When applied to AI, this involves transcending the raw input to discern patterns, meanings, and implications. For AI to be ethical, it must embody fairness in its algorithms, moving beyond mere pattern recognition to understanding the moral context of its operations.

Fairness in algorithms requires more than just technical precision; it necessitates recognition and adjustment for biases that data may contain. This demands a comprehensive approach where algorithms are not only constructed with an awareness of the potential for prejudice but also continuously refined to ensure they do not perpetuate systemic inequalities. In human cognition, understanding involves the moment of insight, where one grasps the significance or solution to a problem. For AI, this is mirrored in the design of algorithms that can discern not just correlations but causations that are ethically aligned with the principle of justice that requires all persons to be left in the free enjoyment of all their rights (CCC 1807).

However, the challenge lies in encoding these ethical decision-making processes into AI systems. One potential solution is the implementation of machine learning techniques that can identify and correct biases within datasets. Another is the development of AI models that can explain their decision-making process, making it possible to audit and adjust algorithms to ensure they are making fair decisions. Ethical AI also requires diverse, competent teams that bring different perspectives to the development process, helping to ensure that a wide range of values and norms are considered.
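One simple instance of the first technique, identifying bias, can be sketched as a demographic parity check: comparing positive-outcome rates across groups. The example below uses synthetic data; a gap near zero satisfies one narrow, statistical notion of fairness, and is no substitute for the moral judgment described above.

```python
def demographic_parity_gap(outcomes, groups):
    """Compute the gap between the highest and lowest positive-outcome
    rates across groups, along with the per-group rates."""
    tallies = {}
    for outcome, group in zip(outcomes, groups):
        positives, count = tallies.get(group, (0, 0))
        tallies[group] = (positives + outcome, count + 1)
    rates = {g: pos / n for g, (pos, n) in tallies.items()}
    return max(rates.values()) - min(rates.values()), rates

# Synthetic outcomes (1 = favorable decision) for two illustrative groups.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(outcomes, groups)
```

Here group A receives favorable decisions at a much higher rate than group B; a large gap is a signal to investigate, not by itself proof of injustice.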

Developing unbiased algorithms also means addressing the limitations of current AI systems. AI must be programmed to recognize the context in which data was collected and the purposes for which it will be used. This requires a form of understanding that looks beyond the immediate data to the broader implications of AI decisions on individuals and society. Ultimately, achieving algorithmic fairness mirrors Lonergan’s process of understanding—it is an iterative, evolving process that requires ongoing engagement with the complexities of human values. By incorporating these principles, AI development can aspire to create systems that not only process data but do so with an awareness of the ethical dimensions of their operation, contributing to a fairer and more just society.

Judgment and AI Accountability

In the third level of Lonergan’s framework, judgment is the critical evaluation of whether the insights gained in the understanding phase correspond to reality. When we apply this to AI, we enter the realm of accountability, where AI designers must not only create systems that process data but also make design judgments that align those systems with societal values.

Designers should be able to evaluate the ethical implications of their actions. The systems they create should reflect human decisions, informed by an understanding of the broader social and moral context of their operation, rather than substitute for them. This involves AI systems being designed to consider the outcomes of their decisions and the impacts those decisions have on various stakeholders. However, unlike human judgment, AI systems cannot possess intuitive knowledge or conscience, so their “judgment” is a programmed response based on the parameters set by their designers.

Transparency is crucial for AI accountability. It allows us to trace and understand how the AI system makes ‘decisions.’ Without transparency, it’s nearly impossible to hold the designers of these systems accountable for their actions. Moreover, transparency is a prerequisite for trust in AI systems, ensuring they operate according to the ethical standards and societal norms set for them. Accountability in AI also means having mechanisms in place for redress when AI systems make decisions that negatively impact individuals or groups. This could involve having oversight boards that review and audit AI decisions or legal frameworks that define the liability for damages caused by AI actions, like the recently passed EU AI Act.
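In code, the traceability that makes review and redress possible often takes the humble form of an audit log. The sketch below, with illustrative field names and a hypothetical system identifier, records the inputs, outcome, and rationale of each automated decision so that an oversight board could later reconstruct what happened.

```python
import datetime
import json

def log_decision(log, system_id, inputs, decision, rationale):
    """Append an auditable record of an automated decision.

    Keeping inputs, outcome, and rationale together in one record is
    what later makes review, appeal, and redress possible.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "inputs": inputs,
        "decision": decision,
        "rationale": rationale,
    }
    log.append(json.dumps(entry))  # serialized so records read as append-only text
    return entry

# Hypothetical example: a screening system defers a borderline case to a person.
audit_log = []
entry = log_decision(
    audit_log,
    "loan-screening-v2",
    {"income": 42000, "requested": 10000},
    "refer_to_human",
    "income-to-loan ratio near policy threshold",
)
```

Note that the most important entry here may be the deferral itself: a system that knows when to hand a decision back to a human embodies the limits of machine “judgment” described above.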

Ensuring AI accountability also requires a multidisciplinary approach. Developers, ethicists, theologians, sociologists, legal experts, and representatives from affected communities should collaborate to define what constitutes ethical AI behavior within specific contexts. Through such collaboration, AI systems can be programmed to weigh decisions against a backdrop of objective and universal human values and community impacts, ensuring their judgments are not made in a vacuum but in consideration of the common good.

Ultimately, embedding judgment into AI processes is a complex endeavor that calls for a commitment to continuous improvement and learning. AI systems, like humans, must “learn” from past decisions, refining their judgment criteria based on feedback and outcomes. This process, modeled after human intentional consciousness, will help AI systems make choices that are technically sound, ethically responsible, transparent, and accountable.

Decision and AI in Service of Society

In Lonergan’s theory, the cognitive process culminates in the decision stage, where understanding and judgment are put into action. Applying this final level to AI is about translating the algorithms’ “judgments” into actions that positively impact society, embodying the societal values and ethical considerations discussed in previous stages.

In real-world applications, AI’s decision-making is manifested in the form of actions, responses, or recommendations provided by the system. To ensure these decisions serve society, AI must be designed to benefit the public and enhance the common good. This could mean creating algorithms that help to efficiently distribute resources in humanitarian crises, AI that assists in diagnosing diseases with greater accuracy, or systems that optimize energy consumption to reduce environmental impact.

One case study that exemplifies AI’s positive contribution is using machine learning in precision agriculture. AI systems analyze vast amounts of data from satellite images, sensors in the field, and weather reports to make informed decisions on planting, watering, and harvesting. This results in more sustainable farming practices, higher yields, and a reduced environmental footprint. Another example is AI used in healthcare, such as developing predictive models for early detection of chronic diseases. By analyzing patterns in medical data, AI can alert physicians to early signs of conditions like diabetes or heart disease, allowing for earlier intervention and better patient outcomes.

To ensure AI’s decisions are ethically aligned and beneficial, there must be a framework for evaluating the real-world impacts of these systems, a feedback loop that informs developers and stakeholders about the outcomes of AI’s actions. By integrating a decision-making model that holds AI systems accountable to the common good and reflecting on case studies where AI has served humanity, we can continue to shape a future where technology is a steadfast ally to societal progress.

Charting a Course for AI

As we reflect upon the journey through the various levels of intentional consciousness and their application to artificial intelligence, we are reminded of the teachings of Catholic social doctrine that emphasize the common good, human dignity, and the imperative of moral action. These principles, deeply rooted in our faith, provide a compass for navigating the complexities of AI development. We recognize the potential of AI to serve humanity. Still, we must also acknowledge our collective responsibility to ensure these technologies are developed and implemented in a manner that upholds these sacred values.

The common good—a cornerstone of Catholic teaching—calls us to look beyond individual interests and consider the welfare of the whole community. In the context of AI, this means striving for systems that not only advance efficiency and innovation but also protect the vulnerable, promote justice, and enhance the quality of life for all people. We must be vigilant, ensuring that the rapid advancements in AI do not outpace our ethical frameworks or our commitment to the human person.

A collective commitment to ethical practices in technology is not just a recommendation; it is an imperative. It is an invitation extended to technologists and ethicists, leaders in business and government, and each member of society to actively participate in shaping the role of AI in our world. This collaborative approach ensures that a multiplicity of voices, especially those informed by our faith’s rich intellectual and moral traditions, contribute to the discourse on AI.