The Metaphysics of Left and Right (No, Not Politics; the Freakin Directions!)

“I am not left-handed…”

I think I am losing my mind.

A few years ago, I was reading yet another popular science book—I think it was Brian Greene’s The Elegant Universe—when I came across a reference to one of those barroom brain teasers. It goes like this: if your image in a mirror is reversed right/left and left/right, why is it not also reversed up/down and down/up?

The answer, while elementary, is surprisingly difficult to articulate. It helps to imagine yourself, not face-to-face with your reflection, but back-to-back, with the plane of the mirror between you and your mirror-doppelgänger. Now stick your arms out and waggle your fingers. For both you and your reflection, up is still up, and down is still down. This side (left to you, right to your doppelgänger) is still this side, and that side (right) is still that side.

The only real difference is that up/down has an objective definition: which direction is the earth and which the sky. Left/right, by contrast, is purely subjective, defined relative to whose set of eyes one is looking through.
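If you want to make this concrete, a little linear algebra does the trick. Here's a minimal sketch in Python (the coordinate setup is my own illustration, not anything from Greene's book): a mirror negates only the one axis perpendicular to its surface, the front/back axis, and leaves up/down and left/right alone.

```python
import numpy as np

# Mirror lying in the x-z plane: reflection negates only y (front/back).
mirror = np.diag([1, -1, 1])

# Body axes of someone facing the mirror along +y, in world coordinates:
up   = np.array([0, 0, 1])   # toward the sky
left = np.array([1, 0, 0])   # the viewer's left hand points along +x
nose = np.array([0, 1, 0])   # pointing at the mirror

for name, v in [("up", up), ("left", left), ("nose", nose)]:
    print(name, v, "->", mirror @ v)

# Only the nose flips. The apparent left/right swap comes from mentally
# comparing the image to a person who has rotated 180 degrees about the
# vertical axis to face us; that rotation, not the mirror, is what
# exchanges left and right.
```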

Simple, right?

Continue reading “The Metaphysics of Left and Right (No, Not Politics; the Freakin Directions!)”

“I’m Probably Wrong About Everything” Podcast Interview

Many thanks to Gerry Fialka for interviewing me on his great podcast. I have no idea why he thought of me, but I’m glad he did. It was fun.

Yes, my lighting sucks. I’m working on it. Check it out anyway, pls…

Yes, You *Do* Have Free Will. So *Choose* to Read This Post

Photo by Vladislav Babienko on Unsplash

Like millions of others, my family and I have spent part of this year’s Christmas holiday watching some version of Charles Dickens’ A Christmas Carol. Actually, we watched two, starting with Bill Murray’s madcap Scrooged and following up with a much darker made-for-TV film from 1999, starring Patrick Stewart. The production was inspired, in part, by Stewart’s one-man stage performances as the character, and Stewart gives a powerful, tragic interpretation of Scrooge, a man so consumed by his traumatic past that he is unable to experience any emotion other than anger, manifested as a chronic, toxic misanthropy.

A Christmas Carol is, of course, an unabashed Christian parable, perhaps the most influential in history outside the Bible itself. Scrooge is visited by ghosts over three nights (the same number of nights Christ lay dead in his tomb), until his “resurrection” on Christmas morning, having seen the error of his ways. But the story resonates with people of all faiths, or no faiths, because of its theme of hope. Scrooge is old, but he ain’t dead yet. There’s still time to fix his life. To change. To choose.

I have always thought that the power to choose–the divine gift of free will–lies at the heart of A Christmas Carol, as it does with all great literature. Of course, it’s hard to imagine Scrooge, after seeing the tragedies of his Christmases past, present, and future, waking up on Christmas morning and saying, “Meh, I’d rather keep being a ruthless businessman. Screw Tiny Tim.” But he could. He might. The ultimate choice given to us is the option to change the nature of our own hearts, our way of thinking.

This matter of free will seems particularly salient this year–this holiday season–because the very concept is under attack. If you Google the term “free will,” you will be presented with a barrage of links with titles like “Is Free Will an Illusion?” and “Is Free Will Compatible with Modern Physics?” Along with the rise of militant atheists like Richard Dawkins, a parallel trend has arisen among theoretical physicists who doubt that free will is even a meaningful concept. After all, if our consciousness is merely an emergent phenomenon of electrical impulses in our brains, and if our brains are, like everything else, determined by the laws of physics, then how is free will even a thing? Every idea we have—every notion—must somehow be predetermined by the notions that came before it, the action and reaction of synapses in our brains.

Our brains, in other words, are like computers. Mere calculators, whose order of operations could be rewound at any moment and replayed again and again and again, with exactly the same results.
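To see what the physicists mean, here is a toy sketch of my own (a caricature, not anyone's actual model of the brain): a seeded pseudo-random program. Restart it from the same initial state and it makes the same "decisions," in the same order, every single time.

```python
import random

def toy_brain(seed, steps=5):
    # A caricature of determinism: a chain of "decisions" driven
    # entirely by the initial state (the seed).
    rng = random.Random(seed)
    return [rng.choice(["left", "right"]) for _ in range(steps)]

# Rewind and replay: identical initial conditions, identical "choices".
print(toy_brain(42))
print(toy_brain(42))  # exactly the same list, every run
```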

Patrick Stewart as Scrooge

Ah, but what about quantum mechanics, you say? The principles that undergird all of quantum theory would seem to imply that human thought, even if you reduce it to electrons in the brain, might be on some level unpredictable, unknowable, and therefore capable of some aspect of free will. Not at all, reply the physicists. The scale at which Heisenberg’s Uncertainty Principle applies—the level of single electrons and other subatomic particles—lies so far below that of the electrochemical reactions in the human brain that their effect must be negligible. That is, a brain with an identical layout of neurons to mine would have exactly the same thoughts, the same personality, as I do. It would be me.

It’s this kind of reasoning that leads people to hate scientists at times, even people like me who normally worship them. The arrogance of the so-called “rationalist” argument—which comes primarily from physics, a field that, in the late 1990s, discovered that it could only explain 4% of everything in the universe—seems insufferable. But more to the point, I would argue that the rationalist rejection of free will leads to paradoxes—logical absurdities—not unlike those raised by the time-travel thought problems that grew out of Einstein’s relativity over a hundred years ago.

For instance, imagine that one of our free-will denying physicists wins the Nobel Prize. He flies to Stockholm to pick up his award, at which point the King of Sweden says, “Not so fast, bub. You don’t really deserve any praise, because all of your discoveries were the inevitable consequence of the electrical impulses in your brain.”

“But what about all the hard work I put in?” the physicist sputters. “All the late nights in the lab? The leaps of intuition that came to me after countless hours of struggle?”

“Irrelevant,” says His Majesty. “You did all that work because your brain forced you to. Your thirst for knowledge, and also your fear of failure, were both manifestations of mechanical processes in your brain. You had absolutely no choice in the matter.”

“Well, in that case,” replies the now angry physicist, “maybe YOU have no choice but to give me the award anyway.”

“Hmm,” muses the King. “I hadn’t thought of that.”

“So, can I have it?”

“I dunno. Let’s just stand here a minute and see what happens.”

As many critics have pointed out, this kind of materialist thinking inevitably leads to a kind of fatalism of the sort found in some eastern religions. If human beings really have no free will—that is, if we are basically automata in thrall to the physical activity of our brains—then what’s the use of struggle? Why bother trying to improve yourself, to become a productive member of society, or become a better person?

Straw man! scream the physicists. No one is advocating that we give up the struggle to lead better lives. That would be the end of civilization. No, we simply mean that this struggle is an illusion, albeit one that we need in order to exist.

Okay. So, you’re saying that we all have to pretend to have free will in order to keep the trains running? We must maintain the illusion of free will in order to continue the orderly procession of existence? But doesn’t this position, itself, imply a kind of choice? After all, if we have no free will, it really makes no difference whether we maintain the illusion or not.

Doesn’t this very discussion represent a rejection of passivity and the meaningfulness of human will?

My fear is that many young people today will be overexposed to the “rationalism” I describe above, especially when it is put forth by otherwise brilliant people. For those who are already depressed by such assertions that free will is an illusion, I would direct you to the great stories of world history. All the enduring mythologies, from the Greek tragedies to the Arthurian legends to the Hindu Mahabharata, revolve around the choices made by their heroes, their triumphs and failings. As a fiction writer, I would argue that the concept of “story” itself is almost synonymous with choice. A boy is confronted by a wolf. Will the boy run left or right? Will he lead the wolf away from his friends back at the campsite, or will he lead the wolf to them, hoping they can help scare it away (or, more darkly, that it will eat one of his friends instead)?

One can also take hope in the fact that not only can physicists still not explain what 96% of the universe is, but they also can’t explain what consciousness is. Of course, some would argue that consciousness, itself, is an illusion. But this leads to an entirely new set of paradoxes and absurdities. (As David Bentley Hart once replied, “An illusion in what?”)

Personally, I suspect that consciousness comes to exist at about the same moment in a species’ evolution when the individual can choose. That is, consciousness implies a kind of choice. It might be a very elemental, even primal kind of choice—perhaps simply the choice of whether or not to swim harder, or fight harder, which I believe even minnows and ants can make—but it’s still a choice, and not merely a matter of pure instinct.

One of my favorite TV shows from my childhood was Patrick McGoohan’s “The Prisoner”, whose every episode begins with the titular character proclaiming “I am not a number! I am a free man!” This assertion, shouted on the beach of the mysterious village in which he has been imprisoned, is followed by the sinister laughter of Number 2, the Orwellian figure who has been tasked with breaking the prisoner’s will. Number 2 is, of course, an awesome and terrifying figure, armed with all the weapons of modern society: technology, bureaucracy, and theory. But he’s still wrong, and he’s ultimately unable to grind the prisoner down.

That’s the hope I cling to, the Christmas message I espouse. Namely, that we’re all able to choose to resist the fatalism of rational materialism. That we can all, eventually, escape the village and be better human beings.

Anyway, that’s my Christmas Eve rant.

(Author’s Note: this is an updated version of a post that originally appeared on my old blog, Bakhtin’s Cigarettes.)

Time for an A.I. Sanity Check

Ever since the first publicly available AI SaaS offerings (that’s Software-as-a-Service for all you non-geeks) like ChatGPT hit the market, the media ecosystem has been in love with the subject of AI as a major disruptive force. Disruptive, that is, in the creative industries hitherto regarded as safe from any kind of automation: illustration, film-making, acting, and writing. Story after story has run about how AI-generated art, screenplays, journalistic articles, etc. might soon replace the work of human content creators.

Within this maelstrom, a smaller subset of articles has begun circulating related to whether AI will ever achieve consciousness. (Some experts believe it already has.) And, within this subset, there is a sub-subset devoted to what I call AI alarmism. That is, the idea that AI, if left to its own devices, might soon overthrow—and perhaps even exterminate—humanity itself, à la the “evil AI” tropes of the Terminator films, the Matrix films, the Tron films, et cetera, et cetera.

Such visions of an AI apocalypse are not new. HAL, the murderous supercomputer in Stanley Kubrick’s 2001: A Space Odyssey, is perhaps the most famous example of an AI gone bad. And a cool but largely forgotten movie from the 1970s called Colossus: The Forbin Project lays out exactly how a psychotic AI (in this case, one entrusted with the care and maintenance of the American nuclear arsenal, just like Skynet) could take over the world by force.

Continue reading “Time for an A.I. Sanity Check”

Random Dose of Optimism

(Yes, We Should Blast Moon Dust into Outer Space to Cool the Earth)

Recently I was enjoying a long-distance phone chat with an old friend of mine, and the conversation turned, as it inevitably does, to the weather. She lives in Ohio, I live in Florida, and yet our answers to our respective inquiries of “How’s the weather where you are?” were identical: Hot AF.

Fortunately, scientists like David Keith have been telling us for years that we are not helpless in the battle against climate change. If worse comes to worst, for a few billion dollars we could deploy specialized aircraft to release particles of sulfur (or some more exotic material) into the upper atmosphere, thus reflecting enough sunlight back into space to cool the planet very quickly. Of course, as Keith warns, we have a poor grasp of what possible global side-effects such a radical course of action might have (although one wonders if these side-effects could be any worse than a Canada-sized wildfire or a continent-wide heat-wave in India). It is precisely because of these unknown side-effects, he explains, that we need to start thinking about the problem now, with a clear head.

Along these lines, one of the strangest—and yet most encouraging—additions to the “solar dimming” set of possible mitigation strategies is the idea that we might blast moondust into outer space. Yeah. For real. This dust, if aimed properly, would linger at one of the Lagrange points between the earth and the sun and, for a time, reduce the solar radiation falling on the earth’s surface. The effect would be short-lived, since the solar wind would blow the dust away into interplanetary space, but this is a good thing in that the technique would thus be throttleable. We could blast as much or as little dust as needed to cool the planet without plunging it inadvertently into a new ice age. (Have you seen that movie Snowpiercer?) Also, unlike the sulfur-in-the-sky option, the lunar dust wouldn’t contribute to air pollution or acid rain here on earth.

Obviously, the notion that we might somehow shoot lunar dust into space on a routine, industrial scale seems like science fiction. But is it? The space agencies of many nations, including the U.S., China, and Japan, have planned future missions to the moon. One can imagine an infrastructure of settlements, supplies, and equipment gradually gathering on the moon, much as one formed in the American West in the 19th Century. One could presumably build some kind of mass-driver or rail-gun that could shoot the dust into space, and power it with solar energy. (Extra power could be stored during the two-week-long lunar “day” to keep the gun shooting during the “night”.)
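For a rough sense of scale, here's a back-of-the-envelope sketch (all figures below are my own illustrative assumptions, not numbers from the article): launching dust off the moon means accelerating it to lunar escape velocity, about 2.4 km/s, which works out to a surprisingly modest energy bill per kilogram.

```python
# Back-of-the-envelope energetics for a solar-powered lunar mass driver.
# All values are rough, illustrative assumptions.

V_ESCAPE = 2380.0        # lunar escape velocity, m/s
SOLAR_FLUX = 1361.0      # W/m^2 in sunlight (no atmosphere on the moon)
PANEL_EFFICIENCY = 0.20  # assumed photovoltaic efficiency
DRIVER_EFFICIENCY = 0.50 # assumed electrical-to-kinetic efficiency

# Kinetic energy needed per kilogram of dust: E = 1/2 * v^2
energy_per_kg = 0.5 * V_ESCAPE**2                    # ~2.8 MJ/kg
print(f"{energy_per_kg / 1e6:.2f} MJ per kg of dust")

# Solar array area needed to launch one metric ton per hour:
power = (1000 * energy_per_kg / DRIVER_EFFICIENCY) / 3600   # watts
area = power / (SOLAR_FLUX * PANEL_EFFICIENCY)
print(f"{area:,.0f} m^2 of panels for 1 tonne/hour")        # ~5,800 m^2
```

That's less than a kilowatt-hour of kinetic energy per kilogram. The expensive part would be building the hardware, not feeding it energy.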

How much would such a setup cost? Billions? Trillions? On the other hand, how much would it cost to rescue two hundred million people from Europe if the Atlantic thermohaline circulation is disrupted, as some scientists predict it will be? Or to build sea-walls around New York and Miami and San Diego and every other major coastal city? Or to feed South America if the crops there dry up during the next heat wave?

It’s time to think outside the box, people.

If worse comes to worst, we shouldn’t rule out going back to the moon. And building a huge cannon there. Or anything else we have to do to cool off the planet. 

Here is the original article on SingularityHub where I learned about this idea:

Nerds in the News

An Aperiodic Monotile has been discovered. Hooray!

When I was in my twenties, I read Roger Penrose’s The Emperor’s New Mind and was blown away by it. Or rather, I was blown away by the 5% or so of it that I could understand. Never before had a science book, paradoxically, filled me with such hope and optimism. And awe.

The book is probably more timely today than ever. With all the hype about AI and machine learning, people are starting to freak out about humanity’s place in the future. 

Penrose’s main thesis, after all, is that human consciousness is not machine-like. Citing the work of brilliant people such as Kurt Gödel, Alan Turing, and himself, Penrose lays out an extremely compelling argument as to why computers—no, not even quantum computers—will never really think, much less achieve actual consciousness. This conclusion enraged an army of science fiction fanboys and others who believe that “the brain is a machine made of meat”.

In building his argument, Penrose refers to examples of discoveries scientists and mathematicians have made that could not (in his opinion) have been discovered by any algorithmic process. One of these examples is his own rather brilliant work in the area of aperiodic tilings.

Aperiodic tilings are something that even a STEM idiot like myself can understand. Anyone who has ever looked down at an intricately tiled parquet floor and wondered about the pattern can relate to this. Most floor patterns—even very complicated ones—will reveal themselves as repetitive if viewed from a sufficient height. But some patterns never repeat, even if you view them from the second floor or the fifteenth or Alpha Centauri. This aperiodicity can only be demonstrated, of course, via mathematical proof, which is often maddeningly complex in and of itself. Mathematicians are constantly seeking out new collections of tile shapes (which, paradoxically, are usually simple enough to cut out of a piece of construction paper with kiddie scissors) that yield these aperiodic tilings.
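For a bit of intuition about how "never repeats" can coexist with a simple generating rule, here's a one-dimensional cousin of the idea (an illustrative analogue I'm supplying, not the actual tiling mathematics): the Fibonacci word. It's built by blindly applying one substitution rule, yet it provably never settles into a repeating period, no matter how far you extend it.

```python
# The Fibonacci word: a 1D analogue of aperiodic order.
# Substitution rule: A -> AB, B -> A. The resulting sequence is
# never periodic, despite being generated by a trivial rule.

def fibonacci_word(iterations):
    word = "A"
    for _ in range(iterations):
        word = "".join("AB" if c == "A" else "A" for c in word)
    return word

print(fibonacci_word(10)[:40])  # ABAABABAABAAB... no period, ever
```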

In the 1970s, Penrose himself discovered an aperiodic tiling that used only two shapes—a “kite” and a “dart”. This was a record at the time, since every previously discovered aperiodic tiling had required more shapes.

Knowing this, I read with some amusement that a new aperiodic tiling had recently been revealed that uses only one shape. A funky shape, surely, but still just one, thus making it an aperiodic monotile. The only wrinkle was that the shape had to be “flipped” at certain points for the tiling to work. 

Then, a few months later, lo and behold, another aperiodic monotile was discovered, and this one required no flipping. The dudes who found it were David Smith (a shape hobbyist from Yorkshire, England), Joseph Samuel Myers, Craig Kaplan, and Chaim Goodman-Strauss, the same team behind the first one.

Truly, this discovery has no impact whatsoever on my daily life, or yours I would bet. And yet it’s still really cool. This mathematical artifact has been hidden out there for all eternity, and just now, in 2023, some nerds discovered it.

That’s why I still have faith in humanity. The nerds. They will save us.

Author’s Note: hat-tip to the good people at openculture.com for bringing this news to my attention, and for posting the video that I have linked above.

Old Robot Cheats Death

If there’s one kind of story I’m a sucker for, it’s the has-been-makes-a-comeback. You know the formula: a once-great hero (e.g. athlete/cop/musician/artist) is down on their luck. They’re disrespected, lonely, and all but forgotten. But then, with the help of a much younger and more optimistic (or older and wiser) companion, the hero gets a burst of inspiration. They discover that they still have vast, untapped powers, and through great discipline, courage, and sacrifice, they focus those powers on a new challenge. Then, at the climax of the tale, they face that challenge and triumph.

I have, of course, just described every single Rocky movie (yes, even Rocky II) as well as 10% of all the Hollywood movies ever made. My favorite cinematic example is a little movie from 2005 called The World’s Fastest Indian, starring Anthony Hopkins. But I tend to like any variation of the formula, even the most banal and overused variety.

Continue reading “Old Robot Cheats Death”