What I’m (Re-)Reading: “The Emperor’s New Mind”

It was a late-summer day in 1990. I was teaching two sections of English Composition 101 at the University of Arizona, and it was the first day of the semester. My second class was in the afternoon, three o’clock. Being a very young Teaching Assistant, I had dressed in my best slacks, dress-shirt, tie, and loafers in a rather comical effort to earn my students’ respect (or, at least, their forbearance). My outfit was also completely inappropriate to the 100-degree-plus temperatures outside, which I felt even more than usual because my classroom for that section was across campus, so I had to schlep it from the dark, air-conditioned office that I shared with two dozen other T.A.s in the basement of the Modern Languages building.

The first class went well. I liked the students, and they seemed to like me. When class was over, I gathered my things into my backpack and headed out. The moment I stepped outside, though, I knew I wasn’t going anywhere. A late summer rainstorm—monsoons, as they are called out there—had struck. I went back into the building and waited until the rain finally stopped. Unfortunately, Tucson, like every other desert city, is prone to flooding, and I knew the streets and even the sidewalks would be swamped for hours. Not wanting to wait that long, I took off my jacket and put it in my backpack. I did the same with my socks and shoes, then rolled up the cuffs on my slacks. Thus barefoot, I ventured out, sloshing my way across campus and then out into the little urban neighborhood where I had parked my car. 

Roger Penrose

I didn’t mind, in part because I had something to read, a paperback copy of Roger Penrose’s new book The Emperor’s New Mind. Yes, it’s one of those rare books that, even in chapter one, becomes so engrossing that you’ll read it while walking through the streets after a monsoon, barely noticing the cold water up to your ankles. I held it in front of me as I walked the familiar route, absorbed in the slow, methodical, yet miraculous argument that Penrose was weaving, an argument that is simple yet dumbfounding: There is something non-mechanical (that is, non-algorithmic) about consciousness.

That is, our brains are not merely “machines made of meat,” to paraphrase the words of AI pioneer Marvin Minsky. Minsky was, of course, a founding proponent of the “Hard-AI” theory of computer science, which states that the brain is really just a very complicated computer which, though made of biological parts, is nonetheless executing an algorithm. In the future—the theory goes—when digital computers become sophisticated enough to execute this mysterious algorithm, they will become “conscious,” too. Just like us. At this point in our technological evolution (the vaunted Singularity), machine consciousness and human consciousness will blur together, such that human beings might wish to “upload” their consciousness (i.e., all their thoughts, memories, desires, etc.) to a computer as digital data, and thus achieve immortality in cyberspace.

It’s an idea that would have seemed absurd—if not incomprehensible—a hundred years ago, but which has been gaining traction since the 1960s. Now, in the age of Generative AI—whose power really does seem miraculous, at times—the notion seems almost a given. A fait accompli.

Yet, if you’re like me, you’ve thought to yourself: This is all bullshit. The human mind is not a computer; and computers—as we now understand them—will never be conscious. To suggest otherwise is a category error. 

However, you’ve probably kept this thought to yourself (deep inside your consciousness, as it were, heh heh) for fear of being ridiculed by the tech-bros and computer nerds in your office or classroom or wherever. Guys who not only subscribe completely to the Hard-AI theory but read (and sometimes write) sci-fi novels about it. They not only believe in the Singularity; they look forward to it!

Alan Turing

Not me. Whenever I hear one of these bros blathering on about Skynet becoming self-aware or uploading their consciousness to a computer, I just say, “Roger Penrose says it’s impossible.” At which point the bro in question will usually give me a befuddled look, as if to say, Who the fuck is Roger Penrose? Which is, of course, a kind of tragedy in and of itself. I usually answer: “Roger Penrose was Stephen Hawking’s partner in theoretical physics.” That shuts the bro up for a bit. After all, anyone smart enough to hang with Stephen Hawking is surely a force to be reckoned with. You can’t as easily dismiss a theoretical physicist of that stature, even when he is opining on a subject—computers—that might seem a bit outside his field. 

In fact, thinking about thinking is right up Penrose’s alley, so to speak. He’s not just a theoretical physicist; he’s a mathematical theoretical physicist, which means he is a world-class mathematician whose creations are designed to help us understand cosmology, particle theory, etc. He shared the Wolf Prize in Physics with Stephen Hawking for their work on the Penrose–Hawking singularity theorems, and he is the discoverer of the now-famous Penrose tiling, the first known aperiodic tiling to require only two shapes. (Many other aperiodic tilings have been discovered since, at least one of which I have posted about.) His other accomplishments are too numerous to mention.

So, Penrose brings a considerable amount of street cred when he finally makes his main assertion in the book—namely, that there is some as-yet unknown quality about consciousness that computers do not possess, and probably never will. He writes, “…there must be something essentially non-algorithmic about consciousness.” He further writes,

When I assert my own belief that true intelligence requires consciousness, I am implicitly suggesting (since I do not believe the strong-AI contention that the mere enaction of an algorithm would evoke consciousness) that intelligence cannot be properly simulated by algorithmic means, i.e. by a computer, in the sense that we use that term today.

He doesn’t fully come out and say this until the last few chapters of this (very long) treatise. The bulk of the book is dedicated to the evidence he has for his belief, beginning with a discussion of Turing Machines. There are a lot of great videos about Turing Machines on YouTube, but suffice it to say that a Turing Machine is a very simple device imagined by Alan Turing (yeah, Benedict Cumberbatch played him in the movie) in the 1930s. It consists solely of a read/write head through which a long—infinitely long, if need be—tape is run back and forth. A mechanism inside the head is capable of a few simple operations: it can read the symbol at the current position; it can write a symbol to the current position; it can move the tape forward or backward; and it can change its internal “state,” chosen from a small, fixed set of states, depending on what it just read.

If this sounds vaguely familiar, it should. A Turing Machine is basically an abstract, idealized version of a modern, programmable computer. If you replace the tape with a hard drive, you essentially have a modern computer—albeit one with almost no RAM and a very simple CPU. Even so, with a sufficient amount of tape, a Turing Machine could run any computer program in existence, even those currently used by AI systems. (It would, of course, do so very, very slowly.)
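For the tinkerers: a Turing Machine is simple enough to sketch in a few lines of Python. Everything below (the rule-table format, the state names, the binary-increment task) is my own illustrative choice, not anything from Penrose’s book; it’s a toy, not a faithful model of Turing’s 1936 paper.

```python
# A toy Turing Machine. This particular rule table increments a binary
# number written on the tape (e.g., "1011", which is 11, becomes "1100", 12).

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=10_000):
    """rules maps (state, symbol) -> (new_state, write_symbol, move)."""
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    pos = max(tape)                # start at the rightmost (least significant) bit
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        state, tape[pos], move = rules[(state, symbol)]
        pos += 1 if move == "R" else -1
    cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip(blank)

# Binary increment: walk left, turning 1s into 0s until a 0 (or blank) absorbs the carry.
increment_rules = {
    ("start", "1"): ("start", "0", "L"),  # carry: 1 -> 0, keep moving left
    ("start", "0"): ("halt",  "1", "L"),  # absorb the carry: 0 -> 1, done
    ("start", "_"): ("halt",  "1", "L"),  # ran off the left edge: prepend a 1
}

print(run_turing_machine("1011", increment_rules))  # prints "1100"
```

Three rules, one head, one tape: that really is the whole machine.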

Kurt Gödel

With this in mind, Turing set out to determine whether one could, eventually, build a Turing Machine that could solve any math problem. That is, could the science of mathematics ever become so advanced, so perfect, so complete, that one could use it to write an algorithm (which would be encoded on the tape) that could decide whether any mathematical statement (also encoded on the tape) was true or false?

If so, then one would never need another, more advanced calculator. It would be a universal computational device.

Sadly (or, rather, happily, in my opinion), Turing was able to prove definitively that this is not possible. One can never—not in a million years—devise an algorithm so smart it can decide the truth or falsity of any mathematical statement. The logic he used to prove this fact is encapsulated in a scenario now called the Halting Problem (on which there is at least one excellent video on YouTube).
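The heart of Turing’s proof can be caricatured in a few lines. Suppose someone handed you a perfect oracle, `halts(f, x)`, that predicts whether program f halts on input x; you could then build a program that defeats its own oracle. The sketch below is my paraphrase of that diagonal argument (the function names are invented), with a small runnable check that neither answer the oracle could give is consistent.

```python
def halts(f, x):
    """Hypothetical oracle: True iff f(x) eventually halts. Cannot actually exist."""
    raise NotImplementedError("Turing proved no such function can be written")

def paradox(f):
    """Does the opposite of whatever the oracle predicts about f run on itself."""
    if halts(f, f):
        while True:    # oracle said "halts"? Then loop forever.
            pass
    else:
        return "done"  # oracle said "loops"? Then halt immediately.

# Now ask: does paradox(paradox) halt? Whatever the oracle answers, paradox
# does the opposite, so the answer is wrong. Both possibilities contradict:
def oracle_is_wrong(answer):
    actually_halts = not answer       # paradox inverts the prediction
    return answer != actually_halts   # the prediction never matches reality

print(oracle_is_wrong(True), oracle_is_wrong(False))  # prints: True True
```

Since a correct `halts` would make `paradox` impossible, and `paradox` is trivially buildable from `halts`, the oracle itself is what cannot exist.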

Turing’s proof was the final nail in the coffin of David Hilbert’s Entscheidungsproblem, which posed the question of whether it was possible to make a perfect, complete mathematical algorithm. Two-thirds of the question had already been answered (in the negative) by Kurt Gödel’s Incompleteness Theorems, which rocked the worlds of math and logic in 1931. Penrose also delves into these theorems at great length, demonstrating how Gödel proved that no consistent set of axioms (rich enough to express arithmetic) can prove the truth or falsity of every mathematical statement. In other words, no single set of statements (no algorithm), no matter how large or sophisticated, will ever be able to settle every math problem.

Penrose’s main point, however, goes beyond even this revelation. He asserts that Gödel’s theorems, by proving that no single algorithm can solve every problem, could not, themselves, be the product of any algorithmic system! In other words, Gödel was not, himself, a computer running some incredibly elaborate, highly-evolved “consciousness algorithm.” Rather, Mr. Gödel, like all conscious beings, was…something else. Penrose writes,

Let us recall the arguments given in Chapter 4 establishing Gödel’s theorem and its relation to computability. It was shown there that whatever (sufficiently extensive) algorithm a mathematician might use to establish mathematical truth – or, what amounts to the same thing, whatever formal system he might adopt as providing his criterion of truth – there will always be mathematical propositions, such as the explicit Gödel proposition Pk(k) of the system…, that his algorithm cannot provide an answer for. If the workings of the mathematician’s mind are entirely algorithmic, then the algorithm (or formal system) that he actually uses to form his judgements is not capable of dealing with the proposition Pk(k) constructed from his personal algorithm. Nevertheless, we can (in principle) see that Pk(k) is actually true! This would seem to provide him with a contradiction, since he ought to be able to see that also. Perhaps this indicates that the mathematician was not using an algorithm at all!

But if Gödel’s brain isn’t just a computer, what is it? Where does the magic of consciousness come from?

Well, nobody knows—not even Penrose himself, as he readily admits. Unsurprisingly, he does not reach for some mystical, supernatural answer (as I, ultimately, do). Rather, he argues—very convincingly—that consciousness might have some relationship to quantum mechanics. That is, there might well be some quantum mechanical aspect to minds, both human and animal.

This is not to say, of course, that quantum mechanics (QM) necessarily causes consciousness. Rather, Penrose merely suggests that there is some deep relationship between QM and consciousness. It’s a pretty cool idea, which he spends the last fifth of the book elaborating. 

When the book came out, critics immediately howled that Penrose was not a neurologist, nor an expert on the human brain, and that no one had (yet) found a truly quantum-mechanical action in the brain. Still, there is something incredibly seductive—even uplifting, I would say—about the idea that there really is something “magic” about us and our experience of life, something still inexplicable to science. Something still defensible, that is, against the brutal, deterministic nihilism of the New Atheists and the hard-core scientific materialists.

At least, I think so. Check it out.

The Physics of Left and Right

A few years ago, I wrote a post called “The Metaphysics of Left and Right (No, Not Politics; the Freakin Directions!),” in which I suggested that our ability to tell left from right, even as young children, and when facing a completely symmetrical landscape like the plane of the ocean, is somehow suggestive of a deeper truth about the nature of consciousness and dualism. As part of this philosophical (and very possibly harebrained) rumination, I explored how difficult it would be to communicate our definitions of left and right to a distant extraterrestrial civilization using only words or very simple pictures. One way, I argued, would be to use the inherent chirality of certain molecules, whose structure our hypothetical E.T. would recognize.

So, you can imagine my delight at discovering that one of my favorite YouTube channels, PBS Space Time, devoted an entire episode to this very subject. As host Matt O’Dowd explains, all biological life displays a mysterious homochirality, a bias toward either left- or right-handedness over the other. All DNA, for example, spirals the same way. Similarly, the sugars and amino acids used by life are not only chiral but unflippable. That is, their mirror images are never found in nature. And when these reversed molecules are deliberately created in a lab, they are biologically useless and, often, highly toxic. Not to mention totally…unnatural.

Even now, in the 21st Century, no one is sure why this handedness in nature exists, although it is theorized that it might be connected to the most fundamental laws of nature—specifically, the asymmetry of the weak force.

It’s pretty cool. Check it out….

Random Dose of Optimism: New Year’s Edition 2026

David Baillot | UC San Diego (CC BY-NC-SA 4.0)

Have you ever noticed that, at any given time, the tech bros and sci-fi nerds of the world are obsessed with one current, real-world technology? Right now, it’s AI. A few years ago, it was cryptocurrency. The topic itself changes over time, but whatever it is, they can’t stop talking about it.

I’m a bit of a nerd myself, but I must confess that I was never much intrigued by cryptocurrency, and I am only mildly interested in AI. Rather, my technological obsession is the same as it was when I was in high school: controlled nuclear fusion, a.k.a. fusion power.

Fusion was a staple of almost every sci-fi book of the 1970s and ‘80s in which space travel or future civilization was described. Heck, even Star Trek’s U.S.S. Enterprise uses fusion to power its impulse engines. That’s why nerds of a certain age were so bewitched by the idea, and we still are.

But the idea itself isn’t science fiction—at least, not for much longer.

Fusion’s potential as the ultimate, clean power source has been understood since the 1940s. The required fuel is ubiquitous (basically water), the radioactive waste negligible (much lower volume and shorter-lived than fission waste), the risk of a meltdown non-existent (uncontrolled fusion reactions don’t ramp-up; they snuff-out), and the maximum power potential unlimited (fusion literally powers the stars).

The very idea of a world powered by clean, cheap fusion energy is enough to make a nerd’s eyes twinkle. (Well, this nerd, anyway.) No more oil wars. Fossil fuels would be worthless. We could use all the extra power for next-gen construction, manufacturing, water desalination, enhanced food production, and on and on and on. Best of all, we could start actively removing all the CO2 that we’ve been pumping into the earth’s atmosphere for 300 years.

Of course, a good bit of that power windfall will probably go to AI data centers, whose appetite for energy seems insatiable. And growing. Whatever your feelings are regarding the AI revolution, it is going to be one of the most important, disruptive, and consequential developments of human history, second only to the invention of the digital computer. 

We’ll need fusion to power it.

So, I find it pleasingly ironic that AI might turn out to play a role in the mastery of fusion energy itself. I learned of this from an article on the World Economic Forum’s website, entitled “How AI will help get fusion from the lab to the grid by the 2030s”. To grasp the gist of the article, however, one should first understand how incredibly, maddeningly, ridiculously difficult controlled fusion is.

Fusion works by pressing the nuclei of atoms (hydrogen, usually) together at enormous pressure—so enormous that it overcomes the mutual electrical repulsion of those nuclei and causes them to fuse into a bigger nucleus (helium), “sweating” off a burst of energy in the process.

This energy sweat is the bounty of fusion, and it’s YUGE. Unfortunately, the process of squeezing a hydrogen plasma into a tight enough space for a long enough period of time at millions of degrees Celsius, without it leaking out the side or, worse, squirting off into the walls of your reactor and melting everything, is damned hard. You remember those prank spring snakes that pop out of a can when you open the lid? Imagine cramming a billion of those snakes into a can the size of a thimble and you’ll have some idea of the challenge.
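For the curious, the arithmetic behind that bounty is simple enough to check. In the deuterium-tritium reaction (the one most magnetic-confinement designs, including CFS’s, aim for), the helium nucleus and neutron that come out weigh slightly less than the two hydrogen isotopes that went in, and the missing mass emerges as energy (E = mc², carried mostly by the fast neutron). A back-of-the-envelope sketch using standard textbook masses:

```python
# Mass-defect arithmetic for the deuterium-tritium reaction:
#   D + T -> He-4 + neutron
# Masses below are standard values in atomic mass units (u).

M_DEUTERIUM = 2.014102
M_TRITIUM   = 3.016049
M_HELIUM4   = 4.002602
M_NEUTRON   = 1.008665
MEV_PER_U   = 931.494   # energy equivalent of one atomic mass unit, in MeV

mass_defect = (M_DEUTERIUM + M_TRITIUM) - (M_HELIUM4 + M_NEUTRON)
energy_mev = mass_defect * MEV_PER_U
print(f"{energy_mev:.1f} MeV released per reaction")  # ~17.6 MeV
```

About 17.6 MeV per reaction: millions of times the energy released by burning a single molecule of any fossil fuel.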

Taming a fusion plasma is so hard, in fact, that it may well be one of those hyper-intensive tasks mere human beings—with our leaden reflexes, sluggishly throwing switches and pushing buttons—simply cannot manage.

For an analogy, I often think of the F-117 Nighthawk, the first true “stealth” aircraft, produced back in the 1980s. The Nighthawk didn’t look like a regular airplane because it wasn’t a regular airplane. Rather, the distinctive, saw-tooth pattern of its wings and fuselage, which was the essence of its radar-evading design, made it look ungainly. And, indeed, it was ungainly, so much so that no human pilot could fly it unaided. Instead, an on-board computer was required to make constant corrections, many times per second, to keep the plane in the air and on target.

Controlling a fusion plasma is, I suspect, a lot like flying the Nighthawk: constant corrections are needed to keep the plasma stable. And they need to happen far faster than a human being can comprehend, let alone attend to.
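Here is a cartoon of why correction speed matters. The toy “system” below is unstable: left alone, any disturbance grows by 50% per tick. Every number is invented for the analogy; real plasma control is vastly more complex.

```python
# Toy feedback loop: an unstable quantity x grows 50% per tick unless a
# controller pushes back. The same controller, run ten times slower, fails.

def simulate(gain, correct_every, steps=50):
    x = 1.0  # initial disturbance
    for t in range(steps):
        u = -gain * x if t % correct_every == 0 else 0.0  # feedback push-back
        x = 1.5 * x + u  # unstable growth plus whatever correction we applied
    return abs(x)

fast = simulate(gain=1.4, correct_every=1)   # corrects on every tick
slow = simulate(gain=1.4, correct_every=10)  # same controller, ten times slower
print(fast < 1, slow > 100)  # prints: True True
```

The fast loop squeezes the disturbance toward zero; the slow one lets it multiply between corrections until it blows up. That, in miniature, is the case for handing the controls to a machine.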

Enter AI.

As we all should know by now, you can teach an AI how to do almost anything—including (we hope) how to maintain a fusion plasma. As the article I mentioned above explains, a partnership has been created between the private company Commonwealth Fusion Systems (CFS) and AI research company Google DeepMind to do just that. One of the more notable achievements of this partnership so far is the creation of a fusion plasma simulator called TORAX, which could be used to train an AI.

Of course, I have no idea if this partnership will turn out to be fruitful. For that matter, I have no idea if we will ever, truly, crack the fusion code once and for all. But I think we will. And I’m not alone. As one expert, Jean Paul Allain, states in the article, “Fusion is real, near and ready for coordinated action.” In other words, fusion might soon be a real thing. For this reason, capitalists have caught the fusion bug and are funding dozens, if not hundreds, of related start-ups, including CFS.

In some ways, this fusion mania is reminiscent of the very earliest days of aviation (way earlier than the Nighthawk). Back in 1908 or so, there were literally hundreds of amateur aviators in Europe, desperately trying to master the trick of powered flight. Many of these enthusiasts were smart, self-funded, and brave. But their craft were not much better than cannonballs with wings, unable to turn or steer, or even stay in the air for very long. Sure, they had all heard rumors of a possible breakthrough that might have been achieved by those bicycle-shop boys, the Wright brothers, over in the U.S., but no one knew exactly what had happened. And they certainly hadn’t seen the proof.

Then, on August 8, 1908, Wilbur Wright brought the proof.

At an exhibition in Le Mans, France, Wilbur flew his and Orville’s latest model over the famous racecourse, remaining in flight for a full one minute and 45 seconds. More important than the duration, though, was the fact that he could steer the airplane, demonstrating banked turns, climbs, and dives.

Three years later, he flew a newer model over the same racecourse for 31 minutes and 25 seconds.

The world had changed.

The same kind of progression is now happening in fusion. In 2024, Korea’s KSTAR tokamak sustained a plasma for 102 seconds. In February of 2025, the WEST tokamak in France sustained a plasma for 22 minutes. Each year or so, the record gets longer, and the plasma becomes more stable. And all this is happening before the ITER mega-reactor has even come on-line (as it is expected to do in the 2030s).

One of these days, fusion is going to take off and never land.

And the world will change. Again.

The Coolest Discovery You’ve Never Heard Of

I recently learned that this year’s Nobel Prize in Physics went to a team of scientists who conducted experiments on quantum tunneling. Their experiments were conducted in the 1980s, which is typical of how the Nobel committees work—it often takes decades for a scientific consensus to build that a body of work was truly worthy of a Nobel Prize.

I was interested in this news because, like most sci-fi nerds, I have an unflagging fascination with quantum mechanics. Heck, I even have a passing understanding of the fundamentals. (No, not just from Star Trek; I’ve read a few actual books! With facts, and stuff!) A few years ago, I even tried to write a non-fiction book about Bell’s Theorem, which is a famous result in quantum mechanics, albeit one that you’ve probably never heard of (unless you’re a physicist or a science teacher or a sci-fi nerd).

John Stewart Bell (copyright CERN)

To be frank, I had never heard of it either, until I read about it in a science book and then ventured to the Wikipedia page, where I learned that the theorem was the work of a Northern Irish physicist named John Stewart Bell in the 1960s, and that it hit the scientific community like a hurricane. Later, in 1975, another physicist, Henry Stapp, called it “the most profound discovery of science.”

When I read this quote, I thought: Whoa, dude! If it’s really “the most profound discovery of science,” I should probably learn something about it.

And I did. Sort of.

Obviously, I will never really understand the underlying math, or even the root concepts that the math represents (which is one reason I abandoned the aforementioned book project). But the theorem itself is pretty easy to understand….
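In fact, the “easy to understand” core can even be checked numerically. In the CHSH version of Bell’s theorem, any local hidden-variable theory must keep a certain combination of measurement correlations, S, between -2 and 2, while quantum mechanics predicts a correlation of E(a, b) = -cos(a - b) for entangled pairs and breaks that bound. The sketch below uses the standard textbook angles; it has nothing to do with my abandoned book project specifically.

```python
# Numerical check of the CHSH form of Bell's inequality.
# For entangled spin pairs (the singlet state), quantum mechanics predicts the
# correlation between detectors set at angles a and b to be E(a, b) = -cos(a - b).
# Any local hidden-variable theory must satisfy |S| <= 2. Quantum mechanics doesn't.

from math import cos, pi

def E(a, b):
    return -cos(a - b)  # quantum correlation for the singlet state

# The detector angles that maximize the violation:
a, a2 = 0, pi / 2
b, b2 = pi / 4, 3 * pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2), about 2.83: beyond the classical limit of 2
```

That gap between 2 and 2.83 is the whole ballgame: experiments keep measuring 2.83, so no theory of local hidden variables can be saving the appearances behind the scenes.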

Continue reading “The Coolest Discovery You’ve Never Heard Of”