
Navigating AI with Lonergan’s Transcendental Precepts

April 25, 2024


In this third installment of our exploration into the intersection of Bernard Lonergan’s philosophy and artificial intelligence, we delve deeper into the practical implications of AI in our daily lives. Our journey began with exploring the incommensurable gap between human consciousness and AI, highlighting human cognition’s unique and irreplicable facets. The subsequent discussion turned to the ethical design of AI, emphasizing the importance of recognizing and respecting this difference for the common good. Now, we pivot to the application of Lonergan’s transcendental precepts—be attentive, be intelligent, be reasonable, be responsible, and be in love—as a map for navigating our interactions with AI systems. These precepts, quintessential to understanding the dynamism of human cognition and our ethical engagement with the world, offer an instructive framework for approaching AI with a mindset that fosters human dignity and promotes the common good. We will explore pathways toward a more ethical, responsible, and human-centric integration of AI into society by grounding our use and development of AI in these foundational principles.

Bernard Lonergan’s transcendental precepts serve as foundational principles for engaging with the world in a manner that supports growth, understanding, and ethical action. These precepts emerge from Lonergan’s comprehensive theory of intentional consciousness, which delineates the dynamic process through which humans come to know, understand, evaluate, and decide upon their engagement with reality. At the heart of Lonergan’s philosophical inquiry is recognizing human cognition as a structured yet fluid process deeply intertwined with what Lonergan calls the “unrestricted desire to know”—that is, the desire for truth and goodness.

Let’s take a look at these five transcendental precepts. 

Be Attentive calls for conscious awareness of our experiences, urging us to engage fully with the information and sensations we encounter. This attentiveness forms the bedrock of our ability to perceive and interact with the world, laying the groundwork for all subsequent cognitive activities.

Be Intelligent challenges us to seek understanding and insight, moving beyond mere data collection to grasp the meanings, relationships, and possibilities inherent in our experiences. This precept drives the human capacity for creativity, problem-solving, and the generation of new knowledge.

Be Reasonable emphasizes the importance of critical evaluation and judgment. It invites us to assess our understandings and insights against the backdrop of evidence and reason, guiding us toward truth by discerning which propositions are well-founded and which are not.

Be Responsible reflects the culmination of the cognitive process in ethical decision-making and action. It calls for choices informed by our attentiveness, intelligence, and reasonableness, emphasizing the need for actions that are moral, just, and that contribute to the common good.

Finally, Be Loving encapsulates and transcends the previous precepts, positioning love as the ultimate motivation and end of human cognition and action. It signifies a self-giving and other-regarding orientation that seeks the true good of oneself and others.

In the context of AI, these precepts highlight the qualitative difference between human cognition and AI’s computational processes. AI, as a creation of human ingenuity, can simulate aspects of human cognitive processes such as data analysis (experiencing), pattern recognition (understanding), and executing programmed tasks (deciding). However, AI lacks the intrinsic capacity for self-awareness, moral reasoning, and the pursuit of the good inherent in human cognition. AI systems operate within the parameters set by their creators without the ability to self-appropriate or embody the transcendental precepts fundamental to genuine human intellectual and ethical engagement. 

For AI designers, the challenge and opportunity lie in appropriating these precepts as a framework for engaging with AI technologies. More specifically, these precepts serve as a guide for ethical interaction and a bulwark against the seductive yet deleterious narratives of technological determinism. This ideology, often propagated by technological giants like Google and Microsoft, suggests that the advancement and outcomes of technology are inevitable, thereby absolving designers of responsibility and misleading users into believing in the commensurability, or even superiority, of AI to human cognition. Such narratives not only obscure the intrinsic limitations of AI but also diminish the recognition of human agency and moral responsibility in shaping technology. Adopting Lonergan’s understanding of human intentional consciousness guards against the deception that AI systems are anything other than what they are, let alone something commensurable with human consciousness.

By adhering to the precepts of being attentive, intelligent, reasonable, responsible, and loving, designers and users can cultivate a more critical and reflective approach to AI. This mindset encourages us to question and evaluate the ethical implications of AI technologies, fostering a culture of accountability where designers are called to answer for their creations’ social and moral impacts. It also empowers users to engage AI with discernment, recognizing these systems as tools created by humans, subject to human flaws and biases, yet capable of embodying human values and serving the common good when designed and used conscientiously.


Embracing these precepts helps counteract the allure of viewing AI as a panacea or possessing intrinsic moral value. It clarifies that the true value of technology lies in how it is designed and employed to enhance human well-being and foster a just society. This perspective is crucial in the face of narratives championing unfettered technological progress at the expense of human dignity. It counters any notion that machines could usurp the human role or diminish the intrinsic worth of human life and freedom. By asserting the primacy of the human person in the development and use of technology, we uphold a vision of a world where technology serves humanity, not the other way around, and where a commitment to love, justice, and the common good guides our collective technological endeavors.

More pragmatically, for designers of AI, this means being attentive to the ways in which AI systems process and respond to data, being intelligent in understanding the implications and limitations of AI, being reasonable in making ethical decisions about AI design and use, being responsible for the impacts of AI on individuals and society, and approaching AI development and interaction with a loving concern for the dignity and well-being of all people. By embedding these values into the design, deployment, and use of AI, humans can ensure that technology serves to enhance, rather than diminish, the human experience.

For everyday users, appropriating Lonergan’s transcendental precepts in their interactions with AI means cultivating a proactive and critical stance toward technology. Being attentive involves more than passive consumption; it requires users to actively seek an understanding of how AI impacts their lives and society. This could mean learning about the data privacy practices of the apps they use or understanding the biases that might be present in AI-driven news feeds. Being intelligent in this context means applying critical thinking to evaluate the information and services provided by AI, discerning their reliability and the motivations behind them. Users should question the sources of their information and the algorithms curating their digital experiences, striving to understand the underlying systems that shape their perceptions and behaviors.

Being reasonable involves making informed choices about technology use and recognizing the potential for AI to influence personal and societal well-being. This could include setting boundaries around AI-driven social media to prevent misinformation or ensuring AI tools used in educational settings promote inclusive and unbiased learning. Being responsible requires users to consider the broader consequences of their technology use and advocate for ethical practices in AI development and deployment. This might involve supporting companies that prioritize data ethics or participating in dialogues about the regulation of AI technologies.

Finally, being loving in interactions with AI emphasizes the importance of maintaining human connection and community in the digital age. It encourages users to leverage AI in ways that foster relationships and support networks, rather than allowing technology to isolate or divide. By appropriating these precepts, everyday users can navigate the AI landscape with awareness, intentionality, and a commitment to ethical engagement, ensuring that their interactions with technology contribute positively to their own lives and the wider community.

In our exploration of Bernard Lonergan’s philosophy in the context of artificial intelligence, we’ve delved into the unbridgeable divide between human consciousness and AI, explored the ethical implications of AI design, and considered how Lonergan’s transcendental precepts can guide our interaction with AI. Throughout, we have underscored the need for a human-centric approach to AI, in which technology enhances human dignity and the common good rather than advancing narratives that put them in jeopardy. The crucial takeaway is to make an explicit commitment to engaging with AI responsibly, always mindful of our unique human capacities for attentiveness, intelligence, reasonableness, responsibility, and love.