Concerns Surrounding Lattice’s “AI Employees” Initiative

August 9, 2024

It has become impossible to ignore, and even something of a cliché to remark on, how saturated our world is with high technology; it increasingly permeates every aspect of our lives. More recently, artificial intelligence has seeped into many of our everyday spheres, whether we are aware of it or not. This is now true even in the domain of human resources, where, unsurprisingly, it has garnered significant attention. Recently, Lattice introduced an AI initiative that aimed to treat AI bots as official employees, complete with employee records and managerial structures. While the initiative demonstrated a commendable effort to harness new technologies for workforce management, it also sparked considerable debate and concern within the HR community. What I want to do here is take a closer look at the underlying technology’s potential to deceive and at the ethical implications of its use.

With this initiative, Lattice sought to create AI characters that emulate the roles of human employees, in keeping with the company’s record of innovation in HR tech. The AI bots were designed to be integrated into the company’s employee records, giving managers tools to onboard, train, and assign goals to these digital workers. Sarah Franklin, Lattice’s CEO, emphasized that the project aimed to explore new horizons in HR technology while maintaining a focus on responsible AI use. By combining advanced AI with robust HR practices, Lattice hoped to create a resource that would enhance productivity and streamline management processes in a modern, digital context.

Despite these good intentions, the launch of Lattice’s AI employees was met with significant backlash. Users and critics quickly highlighted several issues, ranging from practical questions about the implementation to broader ethical objections to representing AI bots as human employees. There were troubling questions about the impact of AI employees on human jobs and the moral implications of treating AI as part of the workforce, and many worried that such a representation could sow confusion and undermine the value of human workers. In response to these valid concerns, Lattice swiftly halted the initiative, a move that demonstrated its responsiveness to feedback and its commitment to maintaining ethical standards in HR practices.

The use of AI systems to simulate human employees raises ethical questions. The workforce relies heavily on employees’ personal and relational roles, which involve genuine interpersonal interactions and professional relationships—elements that an AI, regardless of its programming, cannot authentically provide. The risk of users mistaking the AI for actual employees underscores the need for clear boundaries and the preservation of human workers’ unique and irreplaceable functions, which are the cornerstone of any organizational structure.

One of the primary ethical concerns is the potential for AI to blur the lines between authentic human work and artificial representation. Employees are entrusted with responsibilities that require personal judgment, emotional intelligence, and the ability to adapt to complex social dynamics. These duties are personal and grounded in human experience, something AI lacks and can never have. As the feedback to Lattice made clear, the initiative raised several practical and ethical questions, including the appropriateness of assigning managerial roles to AI and the implications for employee morale. These concerns underscore the fundamental limitations of AI in replicating the human elements of workforce management.

Even so, the ethical implications of using AI as Lattice did in a workplace context extend beyond practical missteps. The presence of “AI employees” could inadvertently reduce human interaction within organizations, effectively undermining the essential relational aspect of professional environments. As industry leaders have often emphasized, the mission of any organization is grounded in personal encounters and community, elements that a machine cannot replicate. The risk of people turning to AI for critical job functions instead of relying on human colleagues is a serious concern, one that could diminish the role of human workers and the richness of professional interactions. From an ethical standpoint, there is also the danger of commodifying professional roles. By creating an AI that mimics human employees, we risk reducing the sacred and deeply human aspects of work to mere transactions with a machine, falling further into the consumerist paradigm. This could contribute to a broader societal trend of depersonalization, in which technology replaces human roles in areas where personal presence and genuine empathy are irreplaceable.

In light of these concerns, organizations must approach the integration of AI with caution and a sense of responsibility, which, thankfully, Lattice quickly did. It is essential to ensure AI tools are clearly presented as supplementary resources rather than replacements for human employees. By maintaining clear distinctions and reinforcing the irreplaceable value of human interaction, organizations can navigate the ethical challenges posed by AI while continuing to explore its potential benefits in supporting workforce management and productivity.

What is fundamentally at play here is the potential for such systems to deceive, to create the impression that there is ‘someone there’ with whom we are actually interacting. That impression could not be further from reality; there is ‘nobody there.’ In workplace contexts, human cognition and presence are of incommensurable value. Bernard Lonergan’s theory of intentional consciousness offers valuable insight into this discussion. Lonergan posits that human consciousness involves a dynamic process of experiencing, understanding, judging, and deciding, capacities that require self-awareness, intentionality, and moral judgment. Unlike AI, which operates on algorithms and data processing, human cognition is characterized by its ability to reflect, discern, and make ethical decisions. This distinction is crucial to recognizing the unique qualities of human thought and professional depth that AI cannot replicate.

According to Lonergan, human cognition is not merely a matter of processing information. It involves a deeper engagement with reality, motivated by an “unrestricted desire to know”1 and an intrinsic pursuit of truth and goodness. This intentional approach to knowing and being in the world underscores the depth of human consciousness, highlighting why AI, despite its advanced capabilities, cannot fulfill the roles of human employees or provide genuine professional interactions. The capacities to offer personalized professional guidance, to empathize, and to make moral judgments are human traits stemming from our intentional consciousness.

Given AI’s limitations in replicating human cognition and the associated ethical concerns, it is imperative to establish guidelines for the responsible use of AI in workforce management. Transparency and human oversight are paramount to ensure that AI serves as a tool to complement, rather than replace, human interaction. AI can be a valuable asset in HR management by providing information and facilitating communication, but it must always be framed as a supplementary resource. To mitigate the risks of deception and maintain the integrity of professional interactions, AI applications should explicitly present themselves as tools, not substitutes for human employees. This involves ensuring that users are aware of the AI’s limitations and the importance of seeking genuine human interaction for professional guidance and support. Ethical guidelines should prioritize respect for human dignity, recognizing the unique value of personal presence and the relational aspects of professional environments that technology cannot replicate.

This implies that the development and deployment of AI in workplace contexts should involve continuous reflection and dialogue within the professional community. By engaging industry leaders, ethicists, HR professionals, and employees in discussions about the appropriate use of AI, organizations can work through the ethical complexities and harness the benefits of technology while upholding their core values and mission.

More broadly, Lattice’s initiative to integrate AI into its workforce management efforts reflects a commendable willingness to embrace new technologies for the greater good. However, it also highlights the need to consider carefully the ethical and practical implications of such innovations. By maintaining a clear distinction between human and artificial agents and prioritizing the dignity and authenticity of human interaction, the professional community can continue exploring AI’s potential while staying true to its mission of fostering meaningful and productive work environments.


1 Bernard Lonergan, Insight: A Study of Human Understanding, Collected Works of Bernard Lonergan, ed. Frederick E. Crowe and Robert M. Doran (Toronto: University of Toronto Press, 2013), 378.