

The measure of a machine: Is LaMDA a person?

The humanoid android robot Alter recreates human movements at the Miraikan museum in Tokyo. | Unsplash/Maximalfocus

In June 2022, Google suspended engineer Blake Lemoine from his work in artificial intelligence. Having previously assisted with a program called Language Model for Dialogue Applications (LaMDA), Lemoine was placed on leave after publishing confidential information about the project. Lemoine himself disputes this description, saying, “All I talked to other people about was my conversations with a coworker.”

Complicating matters, that “coworker” is LaMDA itself.

LaMDA is Google’s latest conversation-generating artificial intelligence. If assigned virtually any identity — such as, say, “you are Tom Cruise,” or “you are secretly a squirrel” — it offers in-character conversation, patterning its responses on databases of real conversations and related information. Its dialogue is extremely sophisticated; LaMDA answers questions, composes poems, and expresses concern at being switched off. Lemoine claims that this behavior shows that LaMDA is a sentient person, and therefore not Google’s property. The company, and many experts, disagree. The claim, however, points to a fundamental question: if a computer program were a person, how would one tell?
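For readers curious about the mechanics, the sketch below (in Python, with an invented stand-in where a trained model would sit) illustrates the basic idea: the assigned identity is simply text placed ahead of the conversation, which the model then continues in character. The function names and prompt format here are illustrative assumptions, not Google’s actual interface.

    # A purely illustrative sketch of identity-conditioned dialogue.
    # generate_reply() is a hypothetical stand-in for a trained language
    # model; systems like LaMDA learn their responses from large
    # conversation datasets, not from hand-written rules like these.
    def generate_reply(prompt: str) -> str:
        """Stand-in for a trained model's text completion (hypothetical)."""
        if "squirrel" in prompt.lower():
            return "I would rather not say where the acorns are hidden."
        return "That is an interesting question."

    def chat(persona: str, history: list[str], user_message: str) -> str:
        # The assigned identity is just more text prepended to the
        # conversation; the model continues the dialogue in character.
        prompt = persona + "\n" + "\n".join(history) + f"\nUser: {user_message}\nAI:"
        return generate_reply(prompt)

    print(chat("You are secretly a squirrel.", [], "Tell me about yourself."))

The toy example captures only the shape of the interaction; the real system’s fluency comes from statistical patterns learned over vast amounts of text.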

Lemoine’s argument follows reasoning first introduced by Alan Turing, a father of AI and of computation in general. By 1950, Turing had observed a pattern in computational research. Skeptical observers would declare that only a thinking human could accomplish some task — e.g., draw a picture, outwit another human, and so forth — only to propose a new, more stringent requirement when a computer achieved the first. Turing proposed a broader metric for intelligence: if an AI could converse indistinguishably from ordinary humans, it should be believed capable of true thought. After all, humans cannot directly detect sentience in each other, and yet typically assume that the people they converse with are precisely that: people.

Anyone fooled by a “robo-caller” can attest that even simple programs may briefly appear human, but the Turing Test as a whole remains a robust challenge. While LaMDA’s breadth of interaction is astounding, the program still shows conversational seams. The AI’s memory is limited in some substantial ways, and it is prone to insisting on obvious falsehoods — its history as a schoolteacher, for example — even when speaking to its own development team. While it often uses the right vocabulary, the structure of its arguments sometimes degenerates into nonsense.

Still, these things might not be disqualifying. Human beings, too, sometimes lie or argue badly; most people would likely not question the self-awareness of another human who said the things that LaMDA does. Indeed, Lemoine argues that, by judging LaMDA’s utterances differently from those of biological humans, observers exhibit “hydrocarbon bigotry.”

More fundamentally, conversation alone is a poor way of measuring self-awareness. The most famous critic of Turing’s “imitation game” is the philosopher John Searle, who proposed a thought experiment called the Chinese Room. Searle imagined a sealed room; outside, a Chinese-language speaker composes messages and passes them in through a mail slot. Inside, a second participant receives the messages but cannot read them. With him in the room, though, is a stack of books defining a series of rules: “If you see such-and-such Chinese characters, write this-and-that characters in response.” Obediently, the man in the room does so, copying a reply and passing it back out. From the perspective of the Chinese speaker, the exchanges are a sensible conversation; from the perspective of the person inside, they are a meaningless exchange of symbols.

Herein lies the flaw in conversation-based measures of intelligence. By definition, any computer program can be reduced to a series of input/output rules like the books in Searle’s imaginary room. An AI, then, simply follows its set of symbol-manipulation rules, forming words and sentences as instructed by the rules, without regard for semantics or comprehension. Any sense of meaning is thus imposed by the speaker “outside” the room: the human user.
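To make the point concrete, the toy sketch below (in Python; the rule table and phrases are invented for illustration) behaves exactly like Searle’s room: it produces sensible-looking Chinese replies by pure symbol matching, with no understanding anywhere in the process.

    # A toy "Chinese Room": replies come from mechanically matching input
    # symbols against a rule book. The rules and phrases are invented for
    # illustration; no comprehension is involved at any step.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am well, thank you."
        "今天天气好。": "是的，很晴朗。",  # "The weather is nice today." -> "Yes, very clear."
    }

    def room_reply(message: str) -> str:
        # The person inside the room only matches shapes on the page;
        # meaning, if any, exists only for the speaker outside.
        return RULE_BOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(room_reply("你好吗？"))  # prints "我很好，谢谢。"

A real program’s rules are vastly more numerous and more abstract than this, but the argument that follows is that the difference is one of scale, not of kind.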

LaMDA, of course, does not have simple rules of the form Searle pictures; no database of canned replies could suffice for its purposes. But the program’s operation is still ultimately reducible to a finite description of that form: given these symbols, take those actions. Indeed, a sufficiently motivated programmer could (very slowly) trace LaMDA’s operation entirely with pencil and paper, with no computer required, and produce identical results. Where, then, is the purported artificial person?

One might object that the same could be said of a human being. In principle, perhaps a dedicated biologist could trace every fluctuation of hormones and electricity in the brain, entirely describing its inputs and outputs. Doing so would not, presumably, deny that humans experience meaning. But this argument begs the question; it assumes that the mind is reducible to the brain, or more broadly, that human personhood reduces to physical properties. Indeed, the seeming inexplicability of consciousness in purely physical terms has earned a name in philosophy: “the hard problem.”

Christianity may be well-positioned to offer a better answer. Most Christians have historically understood personhood to depend on more than physical traits or conversational capabilities; unborn infants, then, are persons, while artificial intelligences are not. A robust defense of this understanding might be attractive — and, indeed, might offer valuable insight.

Unfortunately, despite statements from groups like the Southern Baptists and the Roman Catholic Church, the Church as a whole has been sluggish to respond to the theological questions of AI. LaMDA is not a final endpoint, and coming years will likely see many more who share Lemoine’s convictions. Increasingly, the Church’s rising challenges share a common need for a rich anthropology: a biblical defense of what, precisely, it is to be human.

Brian J. Dellinger is an associate professor of computer science at Grove City College. 
