Time for an A.I. Sanity Check

Ever since the first publicly available AI SaaS offerings (that’s Software-as-a-Service, for all you non-geeks) like ChatGPT hit the market, the media ecosystem has been in love with the subject of AI as a major disruptive force. Disruptive, that is, in the creative industries hitherto regarded as safe from any kind of automation: illustration, film-making, acting, and writing. Story after story has run about how AI-generated art, screenplays, journalistic articles, etc. might soon replace the work of human content creators.

Within this maelstrom, a smaller subset of articles has begun circulating related to whether AI will ever achieve consciousness. (Some experts believe it already has.) And, within this subset, there is a sub-subset devoted to what I call AI alarmism. That is, the idea that AI, if left to its own devices, might soon overthrow—and perhaps even exterminate—humanity itself, à la the “evil AI” tropes of the Terminator films, the Matrix films, the Tron films, et cetera, et cetera.

Such visions of an AI apocalypse are not new. HAL, the murderous supercomputer in Stanley Kubrick’s 2001: A Space Odyssey, is perhaps the most famous example of an AI gone bad. And a cool but largely forgotten movie from the 1970s called Colossus: The Forbin Project lays out exactly how a psychotic AI (in this case, one entrusted with the care and maintenance of the American nuclear arsenal, just like Skynet) could take over the world by force.

But really, the idea goes back much further than that. All of these tales of humanity being overwhelmed by its own creation—its own hubris—are ultimately derivative of the Frankenstein myth. Frankenstein’s monster, in having its own sense of self and experience of life, naturally refuses to do its master’s bidding. It realizes that it is not, in fact, a machine.    

The idea that an artificial being could become conscious despite being constructed of inanimate (literally, dead) body parts was as shocking and provocative to a 19th-century audience as the notion of a sentient AI is to us today. After all, didn’t consciousness require some divine “spark”? No, is the answer given by some famous scientists. Many argue that, of course, AI will achieve consciousness. Being a machine is no barrier to consciousness. After all, they assert, we, as human beings, are machines—“meat machines,” as the computer pioneer Marvin Minsky once called us.

It’s a disturbing idea, not to mention a hard one to believe, especially for those of us who have ever been in love or heartbroken or stricken by grief or thrilled by victory. In fact, I passionately believe that this view of consciousness, whether in computers or animals or human beings, as being just a matter of reaching some uncertain threshold of wiring complexity and processing power, is pure bullshit.

Some famous people agree with me (without the vulgarity). My favorite example is physicist Roger Penrose, who wrote extensively about the subject in his book The Emperor’s New Mind. I read it when it first came out in 1989, and I remember being struck by how convincingly Penrose explained why many intellectual creations of human minds would seem to be impossible for any mere machine to achieve. Specifically, he discusses Kurt Gödel’s incompleteness theorems, which show that any consistent formal system powerful enough to describe arithmetic contains true statements it can never prove—no single mathematical system can ever encapsulate all of mathematics itself. As such, the theorems themselves could not be the result of some algorithmic process (i.e., a mathematical system). All algorithms are limited. Reality is not.

Another concept that I find useful when thinking about the whole AI consciousness question is John Searle’s Chinese Room thought experiment. Searle imagines a Chinese-to-English translation “machine”, which is really just a giant room with a single human occupant, a clerk whose only job is to receive slips of paper through a slot in the door. Each slip of paper has a snippet of Chinese written on it, and it is the clerk’s responsibility to translate it into English. Unfortunately, the clerk doesn’t speak or read a word of Chinese. However, the room is filled with Chinese-to-English dictionaries covering every conceivable sphere of language, from formal to informal to technical to poetic. The clerk uses these dictionaries to look up the various Chinese characters, translate them into English, and pass the translation back out through the slot in the door.

Searle then poses the question: can you say that the clerk inside the room understands Chinese? After all, someone outside the room would have no idea of what is going on inside. As far as they are concerned, the room is a conscious being who is fully fluent in Chinese. The room might even pass a so-called Turing test posed to it by Chinese computer scientists.

But I think we all realize, intuitively, that the clerk inside does not understand Chinese. If you replace the clerk with an extremely fast digital algorithm (yes, even an AI algorithm), it still won’t understand Chinese. It will remain a deterministic system with no more real consciousness than a toaster oven.
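The clerk’s entire procedure is mechanical symbol lookup, and that’s easy to make concrete. Here’s a minimal Python sketch—with a hypothetical three-entry phrasebook standing in for the room’s shelves of dictionaries—that does everything the clerk does:

```python
# A toy Chinese Room: translation by pure table lookup.
# The phrasebook is a hypothetical stand-in for the room's dictionaries;
# nothing in this program "understands" a word of Chinese.
PHRASEBOOK = {
    "你好": "hello",
    "谢谢": "thank you",
    "再见": "goodbye",
}

def clerk(slip: str) -> str:
    """Receive a slip of Chinese, look it up, pass the English back out."""
    return PHRASEBOOK.get(slip, "(no entry in the dictionaries)")

print(clerk("谢谢"))  # thank you
```

Scale the table up to billions of entries, or swap the lookup for a statistical model, and you change the speed and coverage of the room—not the nature of what it is doing.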

In fact, the Chinese Room metaphor pretty much describes how Google Translate and all the other modern translation engines work; the room is just slower and less reliable. And, despite all the blather about machine learning and adaptive input and sheer processing power, AI works pretty much the same way. The only difference is that the algorithm involved is modeled on human behavior—human-generated content, as the scientists call it. So what? Input is input. The fact that AI algorithms have become extremely clever at mimicking this input does not imply that they have become “conscious,” or that they ever will.

Of course, the idea that AI will never become conscious does not mean that it is not incredibly dangerous. After all, an RNA virus is not conscious, but it can evolve to find ways to defeat the human immune system and kill millions of people. For the same reason, a non-conscious Skynet is still a possibility.

To be clear, I am not rejecting AI technology itself as a tool or as a subject worthy of study. In fact, I would guess that AI will live up to most of its hype. It has the potential to transform the world as much as the personal computer and the Internet did in the 1980s and 1990s (and continue to do today).

The problem is that, once again, the media is missing the story. AI’s real potential as a disruptive force lies not in the creative arts but in science. Its power for analysis and pattern matching might well enable us to make fantastic new medical breakthroughs, not to mention cracking the single most pressing engineering problem of our time: fusion energy.

I believe AI can and will do all these things. But it ain’t ever gonna write a good novel, or make a good movie, or compose a good song. In that arena, it’s a POS. 


Author: Ashley Clifton

My name is Ash, and I’m a writer. When I’m not ranting about books or films, I’m writing. Sometimes I take care of my wife and son.
