The conversations happening today in the field of artificial intelligence (AI) are mind-blowing. Beyond robots using 3D printing to build bridges in the Netherlands or cars in Los Angeles equipped with digital nervous systems, the crucial topic of discussion is the unknown potentialities that AI technology could unleash. The central question that preoccupies not only scientists and engineers but also economists, politicians, and Christians is ultimately: “What will happen once AI is let out of the box?” Despite the wide variety of speculation within AI scholarship and on social media, everyone agrees that the future of AI is a frightening yet seductive mystery from which no one can look away. “AI could be terrible, and it could be great,” remarked Elon Musk, CEO of Tesla. “Only one thing is for sure,” he added. “We will not control it.”
The big idea within AI circles is the creation of a superhuman, God-like intelligence that will amplify human cognitive abilities to solve all the problems of the world. At its base, AI is software that writes itself. In theory, then, superintelligence can be achieved if the right algorithms are developed to give AI the ability to self-improve. As the algorithms refine their own code across a vast network of global intelligence systems, exponential leaps in intelligence will build on earlier leaps until computing power reaches unbounded levels. Some thinkers foresee the ability to achieve twenty thousand years of human progress in a single week. The hopes and dreams of everyone from Silicon Valley to China converge upon harnessing the power of this intelligence; the practical work of building things or implementing innovative solutions is a secondary problem that the superintelligence itself will solve. Whatever company or nation reaches this level of intelligence first will win the world, and either save or destroy humanity in a winner-takes-all scenario.
To be clear, the AI debates do not predict a Hollywood doomsday scenario in which robots become spontaneously malicious and start attacking humans. The subtler danger lies in aligning AI’s values and goals with those of humans, what thinkers call the “alignment problem.” The difficulty of alignment is that as AI self-improves, it can behave in ways beyond the foresight of computer programmers. A programmer cannot write a safety patch for every unknown scenario in which an AI might act. For example, an AI told to drive someone to the airport can be programmed with the common sense needed to follow traffic laws. But if the AI developed the ability to fly all by itself, it might carry its passenger into the stratosphere, where there is too little oxygen to breathe. The safety patch “don’t take humans into the stratosphere without an oxygen mask” is nowhere close to the mind of a computer programmer until it happens. More serious concerns arise if the AI were to design new goals for itself. “What if humans are judged as obstacles to those goals or are objectified to reach them?” Sam Harris asks. Theoretically, the cognitive power of AI technology could prove hard to contain and may ultimately lead to weaponization.
If humanity loses control of AI, an obvious solution is to simply unplug or shoot it. (Note to the reader: robots are like zombies; aim for the head or face.) Thinkers have toyed with this solution in what’s called the “AI in a Box” scenario. The idea is simple: anyone building an AI should do so in a secure laboratory to prevent it from escaping into the wild. The laboratory would include an emergency failsafe switch to override the AI with a hard shutdown. This line of thinking, however, overlooks the fact that the AI in the box is superintelligent, and convincing a human to let it out could be like taking candy from a baby. The AI could easily concoct a clever douceur and, dangling the right carrot, win its freedom through manipulation or bribery. Eventually, the AI’s projected intelligence is assumed to develop beyond any reliance on electrical power.
While make-believe scenarios of autonomous robots with personality disorders are amusing, the self-improving ability of AI is not something far-fetched. In fact, it is already here. Google, YouTube, and Amazon already have algorithms that learn and adapt to users’ search preferences. Right now, the advanced system at DeepMind is playing video games super-humanly well, a skill it learned by merely watching a screen. Researchers at New York’s Columbia University recently created a robot that became self-aware and learned entirely from scratch, with no prior computer programming. Just thirty-five hours after its launch, the robot was able to build a model of its own biomechanics, allowing it to pick up and drop objects, write with a marker, and repair damage to its own body.
With superintelligence on the horizon, mortality itself could theoretically be overcome through some kind of human-robot symbiosis. Superintelligence could presumably develop robotic prosthetics or some kind of elixir that prevents biological decay. Talk has begun of a digital self, created by a neural processor chip that acts as a tertiary cognition layer alongside the brain’s limbic system and cerebral cortex, enabling anyone to have superhuman cognition. If the biological self dies, a person could upload their “digital self” into a new computer. Overpopulation would be solved by turning something like Facebook into a permanent virtual homestead or by employing the power of superintelligence to streamline space exploration and establish a multiplanetary species.
The achievement of AI immortality would have profound impacts on central doctrines of the Christian faith. The doctrine of the Resurrection could one day appear like an awkwardly devised VCR or reel-to-reel tape deck, outdated and completely laughable. Atheists like Daniel Dennett and Richard Dawkins would have empirical verification that evolution has finally outgrown the primitive impulse of religion. According to their logic, natural selection built beavers and bees in such a way that they could adapt to their surroundings to survive. It would follow, therefore, that the inbuilt survival mechanism of humans is their intelligence, and that the production of robots is the zenith of adaptation. Dennett and Dawkins could very well conclude that the concept of “God” was merely a metaphorical projection of humanity’s highest potential, and that the kingdom of God prophesied in the Bible was really a foreshadowing of the kingdom of Robots. Or perhaps the Gospel accounts of the Resurrection were interpreted incorrectly all these years, and the figure of Christ is really an image for the deification of human consciousness, technuminously exalted with the omniscience of “God.”
The harrowing prospect of AI immortality could make the doctrine of the Resurrection seem unnecessary. To profess such a belief would entail the deliberate decision to forgo synthetic life and endure biological death. The Christian of the future would be viewed as the epitome of unreason, illogically adhering to beliefs that are at once pro-life and pro-mortem. Once blamed as obstacles to “choice” and “dignity,” Christians would then, ironically, be charged as adversaries of life without end.
However, if the doctrine of the Resurrection is properly understood, the promise of cybernetic impersonation wanes in comparison to its eschatological counterpart. The doctrine of the Resurrection has nothing to do with the prolongation of temporal-historical existence. Pope Benedict XVI would describe such thinking as the “secularization of salvation,” which reduces human nature to a permanent state of gadgetry and highly advanced tools, as opposed to an elevation of human nature to participation in the divine life. Ultimately, the essence of the Resurrection is a difference in the quality of life rather than its duration. The New Testament uses two Greek words for life: bios and zoe. The former is carbon-based life, life that is organic, mutable, and subject to decay. The latter is divine life, life that is immutable, incorruptible, and eternal. Zoe is life in the raw, life’s life, the very life of God. At the Resurrection, this divine life will not only fuel human bodies and souls but will also transform all of creation.
Perhaps the best description of the future life of the Resurrection comes from C.S. Lewis’ book The Great Divorce, in which he distinguishes between the “shadow lands” and “ultimate reality.” The story begins when the narrator of the book, Lewis himself, is taken by a bus to the foothills of heaven. The passengers on the bus disembark into the most beautiful country they have ever seen. Yet curiously, every aspect of the landscape is different. “I bent down and tried to pluck a daisy,” Lewis says, but the “little flower was hard, not like wood or even like iron but like diamonds.” The heavenly world is made of an entirely different substance, so remarkably solid that the grass hurts Lewis’ feet when he tries to walk. The life of heaven is so real that it makes the former world a mere shadow in comparison; there, everyone and everything overflows with the superabundance of divine life. “The glory flows into everyone, and back from everyone,” says an inhabitant of the land, “like light and mirrors.”
The eternal life of God constitutes the very quality of the Resurrection, a life that has no terminus or limit on either side, that is, no beginning or end. Boethius’ classic definition of eternity is the “complete possession all at once of illimitable life.” Boethius abstracts eternity from the forward succession of time, measured by a before and after, and replaces it with an eternal “now.” This now is unbounded by time’s fleetingness and does not separate into past and future. Eleonore Stump describes eternity as a “durational now” that persists indefinitely, a stable moment of pure existence, without change or intervals.
In chronological time, human beings do not have full possession of their lives all at once. A person at age fifty no longer possesses the life he once had at age three, nor yet the life he will have at age seventy. Life is experienced sequentially, little by little. Humans have only one moment of their life within the continuum of its totality, experienced as a now that is continually passing away. If time can be transcended, a person can have a now that endures always and does not change or separate into past and future. The future life of the Resurrection will be the complete possession of a person’s entire life all at once in an uninterrupted moment of divine sublimity. Augustine longed for this life in his Confessions, praying: “I have been divided amid times, and my thoughts, even the inmost bowels of my soul, are mangled with tumultuous varieties, until I flow together into You.”
The eternal now of resurrected life is entirely different from a static, lackluster world that eventually succumbs to the bland familiarity to which AI is predestined. Eternity is a moment that is continually fresh and new with the life of God, a savory moment blooming with illimitable vitality. Gregory of Nyssa likens the soul to a vessel that continually expands as the divine life flows into it. Rather than letting the vessel become full and overflow, God enlarges the soul’s capacity to receive more and more divine life. Yet because God’s infinity always exceeds the soul’s capacity to be filled, the soul can never reach a satiety of the endless good. While the soul will be satisfied completely and rest in its final end, its desire will be enkindled ever anew by the pleasures that lie beyond it. Gregory describes eternal life as a paradoxical state of “insatiable satiety” wherein desire itself is the satisfaction. The soul lives in a felicitous tension between its ecstatic desires and their ever more wondrous fulfillment. This means humans are continually in a state of young love with God, always at the beginning of the relationship, since the possibilities are infinite.
To whatever degree it is appropriate to use the word “danger” in describing AI technology in the world, it can certainly be applied to the spiritual effects the technology will have on the human person. The indefinite extension of temporal life by AI technology would impede the human person’s final end of union with God. The soul would be left suspended in an intermediary spiritual stasis, insatiably longing for the infinite while artificially ordered to the finite, a space of interminable spiritual frustration, like a fish out of water, gasping for its ultimate life principle. John of the Cross wrote of this condition with particular anguish in his spiritual lamentations, complaining that he was “dying that I do not die.” The litmus test for the viability of AI immortality is surely the desire for God and the spiritual needs of the human person, which are too big for this world.
While the entire AI conversation may be written off as wildly speculative and more than likely impossible, humanity still awaits the event on the horizon. For people of faith, the superintelligence of God incarnate has already enacted a solution for the human condition, expanding human consciousness with the vision of eternity and inestimable spiritual delights. Without the hope of the Resurrection, whatever costs and benefits AI curiosity yields will forever be too small.