Can computers understand us? Humans use language in order to communicate, an ability classically thought to be uniquely human. With the rise of computers, and more recently of advanced artificial intelligence systems, some have asked whether computers too can use and understand language. This question has come to the forefront since OpenAI released its new chatbot, ChatGPT, to the public. The new OpenAI bot is extremely impressive and can answer complex questions, translate texts into ancient languages, and even write poetry! Many leading cognitive and computer scientists have been impressed by what this new technology can do.
Surely, it would seem, something that can chat, answer questions, and write poetry has at least some understanding of language. Perhaps systems like it will soon not only understand language but become more intelligent than humans! In fact, some of the brightest minds continue to herald that AI will become so great that it might overtake mankind and become an extremely powerful entity that possesses “superintelligence,” or intelligence that vastly surpasses human intelligence in every way. Some, like former Google self-driving car engineer Anthony Levandowski, explicitly hope that humans will worship an AI “god” in the future and have formally filed papers to establish a religion around artificial intelligence. In light of this newfound religious interest in AI, it is worthwhile to revisit the topic of idolatry.
Idolatry is a sin nearly as old as man. In the Old Testament, we find numerous stories of the people of Israel falling into the sin of idolatry. At its core, idolatry is the attribution of deity to something that is created. Confusing the relationship between Creator and creature is what Bishop Barron calls the “most fundamental distortion,” from which follows “every other form of moral and spiritual dysfunction.” Needless to say, idolatry is a very grave thing, and we must root it out wherever we find it.
Idolatry in the Old Testament took different forms but often involved humans attributing deity to things that humans themselves made. This kind of idolatry is mocked by the prophet Isaiah:
Half of it he burns in the fire. Over the half he eats meat; he roasts it and is satisfied. Also he warms himself and says, “Aha, I am warm, I have seen the fire!” And the rest of it he makes into a god, his idol, and falls down to it and worships it. He prays to it and says, “Deliver me, for you are my god!” (Isa. 44:16–17)
Here, Isaiah mocks a man worshiping a wooden figure as a god. While worshiping a piece of wood might seem silly to most contemporary individuals, worshiping, or at least attributing a quasi-divine status to, artificial intelligence may not seem as silly. After all, AI systems, unlike wooden blocks, seem to be intelligent. They seem to understand language and to reason, and many think they might become more intelligent than we are. However, although computers aren’t made of wood, they are, just like the figure in Isaiah 44, creations of man and nothing more. To show this, it is important to look toward the philosophy of mind to see why such strong claims made by some AI proponents today are mistaken. As it turns out, computers running computer programs are incapable of even understanding language. This can be demonstrated rather easily by a simple illustration.
Searle’s Chinese Room
John Searle, a philosopher of mind, developed a simple thought experiment in 1980 to show that while computers might seem to understand language, they in fact do not. His illustration goes something like this:
Imagine you are in a room with lots of filing cabinets and papers. You have a detailed set of instructions written in a language you can understand (English, for our purposes), and you are tasked with giving answers to questions that are asked in Chinese. Questions written in Chinese are slipped into the room through a slot in the door. You are supposed to take the question and follow your set of instructions, using the resources of the room to compile Chinese characters and “respond” to the question with an answer written in Chinese. Your “answer” is entirely determined by the instructions and the question received. You yourself have no knowledge of Chinese, but the instructions you have are so clever that just by following them, opening the right drawers and filing cabinets, and arranging the characters the right way, you can produce answers as good as those a person who knows Chinese would give! Thus, the people outside the room might be unaware that their questions are being answered by a person who has no knowledge of Chinese.
Obviously, your job of finding and arranging Chinese characters is something that could be done (likely more rapidly) by a computer. So then, if a computer instead of you were answering the Chinese questions, would that show that the computer understood Chinese? Clearly not, since we assumed that the person involved had no knowledge of Chinese. The illustration shows that even if a program were cleverly devised so as to mimic human answers, the computer (or person) running the program would not necessarily have to understand the questions or the answers.
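The room’s procedure can be sketched as a toy program. To keep things short, this sketch uses a simple lookup table standing in for Searle’s elaborate instruction book (an assumption for illustration; the sample questions and canned answers are likewise invented), but the point is the same: the program matches and emits character strings without any grasp of what they mean.

```python
# A toy "Chinese Room": answers are produced by pure symbol matching.
# The rule book below is a stand-in for Searle's instruction set; the
# program has no access to the meaning of any of these characters.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气如何？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
}

def chinese_room(question: str) -> str:
    """Look up the question's symbols and return the prescribed symbols."""
    # The fallback string means "Sorry, I don't understand."
    return RULE_BOOK.get(question, "对不起，我不明白。")

print(chinese_room("你好吗？"))
```

To someone outside the room, the replies can look perfectly competent, yet nothing in the program ever touches the meaning of a single character, only its shape.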
The Key Distinction
Searle’s example relies upon the difference between semantics and syntax. Semantics refers to the meaning of words. Words like “pool,” “fast,” and “London” describe or refer to things in the world. They have real meanings that humans grasp in order to use those words to converse. To convey the meaning correctly, they must be ordered properly, or put into the right syntax. For instance, the sentence “John likes to swim fast in his pool in London” conveys a certain idea, while the sentence “fast London John swim likes his in pool to” conveys virtually nothing and is gibberish. The words are the same in both, but their arrangement is different. Again, the sentence “John likes to swim fast in his microwave in London” makes no sense, not because of improper syntax but because a microwave is not something in which a person can swim. Anyone who understands the meaning of the words “swim” and “microwave” would know that the sentence doesn’t make sense.
The difference between syntax and semantics is important, Searle notes, because a computer program is merely syntactic and not semantic. All the computer (or human in the Chinese Room) is doing is following a formulaic system of arranging symbols. Those symbols have no meaning or semantic content for the computer or to the human in the room. Thus, since humans do understand language, this shows that the human mind is not merely a computer program. As Searle says: “Because programs are defined purely formally or syntactically, and because minds have an intrinsic mental content, it follows immediately that the program by itself cannot constitute the mind.”
As Searle’s Chinese Room illustration helps to show, while AI programs may be impressive, they are not conscious, intelligent beings with understanding of language, the world, etc. Rather, computers are machines that are extremely efficient at manipulating symbols according to some program. This in no way denigrates the great things that artificial intelligence has helped humans accomplish. Undoubtedly, AI can help humans to solve a variety of problems as it continues to be refined and utilized. However, regardless of how useful and impressive AI might seem, a little philosophical reflection can show that a computer program is not capable of what the human mind is. AI is a manmade thing, and it should be treated as such. Any attempt to deify artificial intelligence is gravely mistaken.
Since human nature hasn’t changed, the temptation toward idolatry, albeit in ever-novel forms, will always be present. As Christians we must recognize the nature of AI and realize that, despite appearances, machines are not capable of understanding and reasoning in the way humans are, let alone in the way God is. We must avoid the temptation to attribute divine or even quasi-divine status to something that we have created, no matter how impressive it is or may become. Thankfully, God has given us our reason so that even if we don’t attend to his revelation or are unaware of it, a little philosophy can help us see the true nature of AI.