Everything you need to know about alt-rock in the 1980s can be learned from listening to four bands: The Smiths, Echo & the Bunnymen, Siouxsie and the Banshees, and The Cure. I’ve featured all of these bands except The Cure, so it’s about time, especially considering that they were, in some ways, the most innovative and versatile of the four.
Most people know the song “Just Like Heaven”, and they should, because it’s a masterpiece. But I love this one, too, because it’s so strange and powerful. With Robert Smith singing on the edge of his vocal range, his voice breaking and whinging like the embodiment of every teenage neurosis you can think of, “Why Can’t I Be You?” is the ultimate song about Nerd Love.
It’s also a great dance song. (Yes, a danceable goth-rock song. Who knew?) And the horn section is epic. (A goth-rock song with horns? Yes, again!)
It is almost a law of nature that if you scroll through Twitter for long enough, you will run across a Star Trek meme. And, if you keep scrolling, you will eventually run into a “There are four lights!” meme.
These memes are, of course, a reference to one of the most famous episodes of Star Trek: The Next Generation. Entitled “Chain of Command,” it depicts the ordeal that Captain Picard must endure at the hands of a Cardassian interrogator named Gul Madred. It is one of the most famous (infamous?) episodes because of its brutal depiction of torture and humiliation, up to and including the truly shocking moment when Picard is hung naked by his wrists (thus cinematically immortalizing Patrick Stewart’s impressively muscular British arse). Despite the disturbing subject—or, perhaps, because of it—the episode has become one of the most beloved and acclaimed of the entire series.
I, for one, believe that “Chain of Command” deserves every iota of the praise it has received. It’s brilliantly acted, of course, by Stewart and his former Shakespearean colleague David Warner, who was one of the greatest actors of his generation. And it tackles a dreadful but important subject—the nature of political torture. Screenwriter Frank Abatemarco conducted research into the impact and nature of such torture as reported by Amnesty International, and the episode seems completely believable, not to mention chilling. It dissects the psychology of the victim but also of the torturer, with Warner brilliantly conveying how Madred, an intelligent man and, apparently, a loving father, is nonetheless able to rationalize his activities by dehumanizing his victim.
If one trawls the many Reddit and other message-board threads devoted to the episode, one learns that many of its fans—especially former English majors, like myself—were quick to seize on its central homage to George Orwell’s 1984. Specifically, it echoes the climactic scenes in 1984 when Winston Smith is tortured by O’Brien, a man whom Winston believes to be a friend and fellow revolutionary but who turns out to be a commander of the Thought Police.
As every Star Trek nerd knows, of course, the most direct parallel between 1984 and “Chain of Command” comes in the episode’s climax, when Madred shines four lights on the wall and asks Picard how many lights he sees. When Picard answers, truthfully, “four,” Madred shocks him.
In 1984, O’Brien lays Winston out on an electronic torture-rack and says to him,
“Do you remember,” he went on, “writing in your diary, ‘Freedom is the freedom to say that two plus two make four’?”
“Yes,” said Winston.
O’Brien held up his left hand, its back toward Winston, with the thumb hidden and the four fingers extended.
“How many fingers am I holding up, Winston?”
“Four.”
“And if the Party says that it is not four but five—then how many?”
“Four.”
The word ended in a gasp of pain.
The torment continues, with Winston replying “five” and “three” and anything else he can think of to stop the pain. At which point O’Brien pauses the interrogation and says, “Sometimes, Winston. Sometimes they are five. Sometimes they are three. Sometimes they are all of them at once. You must try harder. It is not easy to become sane.”
Richard Burton as O’Brien in 1984
It’s this portion of 1984 that, to me, establishes O’Brien as the supreme villain of world literature. He is also its greatest nihilist. He seems to have no illusions about the purpose of Big Brother’s totalitarian rule—namely, for the rulers to partake of the ultimate sadistic pleasure in endlessly tormenting their subjects, forever. He blithely explains to Winston how the state will soon make things even worse for the common people, including modifying human anatomy so that people cannot even have orgasms. When O’Brien also suggests that the state might increase the pace of life so that people go senile at thirty, Winston pleads:
“Somehow you will fail. Something will defeat you. Life will defeat you.”
“We control life, Winston, at all its levels. You are imagining that there is something called human nature which will be outraged by what we do and will turn against us. But we create human nature. Men are infinitely malleable.”
Many dudes on Reddit have observed, correctly, that the scenes with Picard and Madred are about mind control, and how strong people must fight to resist it. But the greater issue comes in the last scene of “Chain of Command,” after Picard has been freed and is safely back on the Enterprise. There, washed and fed, he meets with Counselor Troi and explains the worst part of his ordeal—namely that, in the delirium of his agony, he actually saw “five lights,” as he was commanded to do by Madred.
In other words, despite his great intellect and courage, Picard’s body began to alter his perceptions. He became, in O’Brien’s words, “infinitely malleable.”
In 1984, Winston experiences the same horrific revelation.
And he did see them, for a fleeting instant, before the scenery of his mind changed. He saw five fingers, and there was no deformity. Then everything was normal again, and the old fear, the hatred, and the bewilderment came crowding back again. But there had been a moment—he did not know how long, thirty seconds, perhaps—of luminous certainty, when each new suggestion of O’Brien’s had filled up a patch of emptiness and become absolute truth, and when two and two could have been three as easily as five, if that were what was needed. It had faded out before O’Brien had dropped his hand; but though he could not recapture it, he could remember it, as one remembers a vivid experience at some remote period of one’s life when one was in effect a different person.
I have written before about how the greatest themes in literature are best posed as questions. The question here is, “Is there really some indomitable spirit in us that can’t be crushed and mastered by force and torture?” Or, put another way, “Are human beings really infinitely malleable, to the point that they can’t even trust their own senses?”
To many—and especially to those who adhere to a philosophy of materialism—this might seem a banal question. Their answer would certainly be: Of course, people are infinitely malleable; human beings are the product of their sensations, and if those sensations can be completely controlled (through drugs or torture or propaganda), then those beings can be completely controlled, too.
If this is true, I fear that the future of humanity is hopeless. We will, eventually, devolve into some kind of hive-mind existence (yes, rather like the Borg in Star Trek), which, even if it isn’t quite as hellish as the nightmare-state that O’Brien creates for the subjects of 1984, would still be devoid of individuality or any authentic human experience.
Fortunately, I don’t believe it is true. For one thing, as a practical matter, I don’t believe that a ruling class whose only motivation is sadistic sexual pleasure could sustain itself. It’s too destructive, and its members would inevitably turn on each other. And even if they didn’t, they would die out, unable to create and nurture the most basic form of life—children. In other words, Big Brother can only destroy. It cannot create.
On a more philosophical note, I do believe that there is a “something called human nature,” as O’Brien puts it, that will inevitably rebel against tyranny. All the hero stories of world mythology reflect this, as do our own, modern mythologies. Like, for instance, Star Trek. Clearly, in the imagined universe of the twenty-fourth century, civilization has not devolved into some kind of soul-destroying dystopia. Quite the opposite. The Federation represents civilization’s response to the ever-present threat of oppression in all its forms, from fascist militarism (the Klingons) and xenophobic isolationism (the Romulans) to full-on, cybernetic collectivism (the Borg). The Federation beats them all.
So, what is the Federation’s secret? Probably a lot of things. But, for my money, it’s that the Federation is a pluralistic society, open to all races, ideas, and voices.
Back in college, I studied the great Russian literary critic M. M. Bakhtin, who saw the novel as the greatest innovation in art. The novel represents a quantum leap because it is the supreme example of what Bakhtin called dialogism—the interplay of voices and perceptions from which our shared experience of consciousness emerges. This impulse toward dialogism—dialogue—is always set in opposition to the evil but omnipresent forces of monologism, which strive to impose a singular, monolithic truth on humanity and thus control it.
Big Brother’s IngSoc party might be the most monologic literary creation ever imagined by a writer (Orwell). Conversely, the Federation might be the most dialogic, combining not only an endless multiplicity of voices and points of view but actual sentient species from all over the galaxy, united by their shared…humanity? For lack of a better word, yes.
Let’s hope Star Trek’s vision of the future is the one that plays out.
Have you ever noticed that, at any given time, the tech bros and sci-fi nerds of the world are obsessed with one current, real-world technology? Right now, it’s AI. A few years ago, it was cryptocurrency. The topic itself changes over time, but whatever it is, they can’t stop talking about it.
I’m a bit of a nerd myself, but I must confess that I was never much intrigued by cryptocurrency, and I am only mildly interested in AI. Rather, my technological obsession is the same as it was when I was in high school: controlled fusion energy, a.k.a. fusion.
Fusion was a staple of almost every sci-fi book of the 1970s and ‘80s in which space travel or future civilization was described. Heck, even Star Trek’s U.S.S. Enterprise uses fusion to power its impulse engines. That’s why nerds of a certain age were so bewitched by the idea, and we still are.
But the idea itself won’t remain science fiction—at least, not for much longer.
Fusion’s potential as the ultimate, clean power source has been understood since the 1940s. The required fuel is ubiquitous (basically water), the radioactive waste negligible (much lower volume and shorter-lived than fission waste), the risk of a meltdown non-existent (uncontrolled fusion reactions don’t ramp up; they snuff out), and the maximum power potential unlimited (fusion literally powers the stars).
The very idea of a world powered by clean, cheap fusion energy is enough to make a nerd’s eyes twinkle. (Well, this nerd, anyway.) No more oil wars. Fossil fuels would be worthless. We could use all the extra power for next-gen construction, manufacturing, water desalination, enhanced food production, and on and on and on. Best of all, we could start actively removing all the CO2 that we’ve been pumping into the earth’s atmosphere for 300 years.
Of course, a good bit of that power windfall will probably go to AI data centers, whose appetite for energy seems insatiable. And growing. Whatever your feelings are regarding the AI revolution, it is going to be one of the most important, disruptive, and consequential developments of human history, second only to the invention of the digital computer.
We’ll need fusion to power it.
So, I find it pleasingly ironic that AI might turn out to play a role in the mastery of fusion energy itself. I learned of this from an article on the World Economic Forum’s website, entitled “How AI will help get fusion from the lab to the grid by the 2030s”. To grasp the gist of the article, however, one should first understand how incredibly, maddeningly, ridiculously difficult controlled fusion is.
Fusion works by pressing atomic nuclei (of hydrogen, usually) together at enormous temperature and pressure—so enormous that the nuclei overcome their mutual electrical repulsion and fuse into a bigger nucleus (helium), “sweating” out a photon or two in the process.
This photon sweat is the bounty of the fusion energy, and it’s YUGE. Unfortunately, the process of squeezing a hydrogen plasma into a tight enough space for a long enough period of time at millions of degrees Celsius, without it leaking out the side or, worse, squirting off into the walls of your reactor and melting everything, is damned hard. You remember those prank spring snakes that pop out of a can when you open a lid? Imagine cramming a billion of those snakes into a can the size of a thimble and you’ll have some idea of the challenge.
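And just how YUGE is the payoff? The reaction most reactor projects are chasing is deuterium-tritium fusion (deuterium and tritium being heavy isotopes of hydrogen), and the standard textbook numbers look like this. Nothing here is specific to CFS or any other outfit; it’s simply the arithmetic behind the excitement:

```latex
% Deuterium-tritium fusion: standard textbook values, not specific to any one reactor design.
\[
  {}^{2}_{1}\mathrm{D} \;+\; {}^{3}_{1}\mathrm{T}
  \;\longrightarrow\;
  {}^{4}_{2}\mathrm{He}\ (3.5\ \mathrm{MeV}) \;+\; n\ (14.1\ \mathrm{MeV}),
  \qquad Q \approx 17.6\ \mathrm{MeV}
\]
% Per kilogram of fuel that works out to roughly 3 \times 10^{14} joules --
% on the order of ten million times the chemical energy in a kilogram of coal.
```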
Taming a fusion plasma, though, is so hard that it may well be one of those hyper-intensive tasks that mere human beings—with our leaden reflexes, sluggishly throwing switches and pushing buttons—might not be able to manage.
For an analogy, I often think of the F-117 Nighthawk, the first true “stealth” aircraft, produced back in the 1980s. The Nighthawk didn’t look like a regular airplane because it wasn’t a regular airplane. Rather, the distinctive, faceted, saw-tooth design of its wings and fuselage, which was the essence of its radar-evading shape, made it look ungainly. And, indeed, it was ungainly, so much so that no human pilot could fly it unaided. Instead, an on-board computer was required to make constant corrections, many times a second, to keep the plane in the air and on target.
Controlling a fusion plasma is, I suspect, a lot like flying the Nighthawk; constant corrections are needed to keep the fluid stable. And they need to happen much faster than a human being can comprehend, much less attend to.
Enter AI.
As we all should know by now, you can teach an AI how to do almost anything—including (we hope) how to maintain a fusion plasma. As the article I mentioned above explains, a partnership has been created between the private company Commonwealth Fusion Systems (CFS) and AI research company Google DeepMind to do just that. One of the more notable achievements of this partnership so far is the creation of a fusion plasma simulator called TORAX, which could be used to train an AI.
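To make that a bit more concrete, here is a toy sketch of what “training a controller against a simulator” means in principle. Everything in it is a made-up illustration: the PlasmaSim class, its single “instability” number, and the brute-force gain search are hypothetical stand-ins, not TORAX’s actual interface or DeepMind’s actual method, which involves learning far richer control policies over many sensors and magnetic coils.

```python
# A toy sketch of "training a controller in a simulator." PlasmaSim and its
# single scalar "instability" are hypothetical stand-ins, NOT the real TORAX
# API; the point is the shape of the feedback loop, not the physics.

import random

class PlasmaSim:
    """Hypothetical toy simulator: one number we try to keep near zero."""
    def __init__(self):
        self.instability = 0.0

    def step(self, coil_adjustment):
        # The instability drifts randomly each tick; the controller's
        # adjustment pushes it back toward zero.
        self.instability += random.uniform(-0.1, 0.1) - coil_adjustment
        return self.instability

def run_episode(gain):
    """Run one simulated 'shot' and score how badly the plasma wandered."""
    sim, reading, total_error = PlasmaSim(), 0.0, 0.0
    for _ in range(1000):              # thousands of corrections per shot
        adjustment = gain * reading    # simple proportional feedback
        reading = sim.step(adjustment)
        total_error += abs(reading)
    return total_error

# "Training" here is just picking the feedback gain that keeps the error lowest.
# A learned controller would do the same kind of search, only over a vastly
# richer space of policies, sensors, and actuators.
best_gain = min((g / 10 for g in range(1, 11)), key=run_episode)
print(f"best gain found in simulation: {best_gain:.1f}")
```

A real plasma-control loop runs corrections thousands of times per second, which is exactly why you want a machine, not a person, at the controls.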
Of course, I have no idea if this partnership will turn out to be fruitful. For that matter, I have no idea if we will ever, truly, crack the fusion code once and for all. But I think we will. And I’m not alone. As one expert, Jean Paul Allain, states in the article, “Fusion is real, near and ready for coordinated action.” In other words, fusion might soon be a real thing. For this reason, capitalists have caught the fusion bug and are funding dozens, if not hundreds, of related start-ups, including CFS.
In some ways, this fusion mania is reminiscent of the very earliest days of aviation (way earlier than the Nighthawk). Back in 1908 or so, there were literally hundreds of amateur aviators in Europe, desperately trying to master the trick of powered flight. Many of these enthusiasts were smart, self-funded, and brave. But their craft were not much better than cannonballs with wings, unable to turn or steer, or even stay in the air for very long. Sure, they had all heard rumors of a possible breakthrough that might have been achieved by those bicycle-shop boys, the Wright brothers, over in the U.S., but no one knew exactly what had happened. And they certainly hadn’t seen the proof.
Then, in August of 1908, the proof arrived. At an exhibition in Le Mans, France, Wilbur flew his and Orville’s latest model over the famous racecourse, staying aloft for a full one minute and 45 seconds. More important than the duration, though, was the fact that he could steer the airplane, demonstrating banked turns, climbs, and dives.
Just a few weeks later, flying from a military field nearby, he kept his machine aloft for one hour, 31 minutes, and 25 seconds.
The world had changed.
The same kind of progression is now happening in fusion. In 2024, Korea’s KSTAR tokamak sustained a plasma for 102 seconds. In February of 2025, the WEST tokamak in France sustained a plasma for 22 minutes. Each year or so, the record gets longer, and the plasma becomes more stable. And all this is happening before the ITER mega-reactor has even come on-line (which it is now expected to do in the 2030s).
One of these days, fusion is going to take off and never land.
(…or, Why I Leave My Christmas Lights Up Till January 5th)
Author’s Note: I wrote this post some years ago. I am reposting it now because Christmas.
I’m a big fan of the Brother Cadfael novels by Edith Pargeter. Brother Cadfael is a medieval monk who has two areas of expertise: botany (plant-based medicine) and solving crimes. Ever since I began reading the Cadfael series about twenty years ago, I’ve been fascinated by the richness and detail of Catholic dogma. Like all monks, Brother Cadfael observes the canonical hours that strictly divide the entire day into a schedule of prayer sessions (for which he is always late). And I also became interested in the various holy days that he and his fellow monks observe.
Perhaps that’s the reason I refuse to consider Christmas over on December 26. Rather, I prefer to stick to the original church concept of Christmastide, which begins on Christmas Day and extends all the way to Twelfth Night on January 5, the eve of the Epiphany (better known as Three Kings Day in the Latin community).
Twelfth Night, of course, ushers in the Epiphany, when the Christ-child was perceived by the wise men as a divine being. The wise men are, themselves, a subject of fascination for me. Their story—which is barely mentioned in the Bible—has been embellished over the centuries by various Catholic fanboys. According to current tradition, there were three of them, and they were in fact kings from various parts of the Orient: Balthasar of Arabia, Melchior of Persia, and Caspar of India. In some versions, their ages are given as 20, 40, and 60, representing the three phases of a person’s life (youth, middle age, and old age).
So, if you’re still hung over from Christmas Day (I’m speaking metaphorically, although I did have a bit of whiskey in my eggnog), take heart. Christmas might be over, but Christmastide goes on and on. As it should. Those medievals had a much better sense of how to celebrate, from which we, as harried, stressed-out modern Westerners, have much to learn.
And whether you’re a Christian or a lapsed-Christian or just a secular person who respects the Christ figure and observes the holiday solely from a sense of tradition, why not extend the holiday a bit, even past New Year’s Day? Leave your Christmas lights up. Give another present or two to your loved ones. Have another feast.
The brilliant singer and musician Chris Rea has passed away. I remember when I first heard this song (my favorite of his). I was in grad school in Tucson, Arizona and had a summer job teaching English at Pima Community College. I was driving there one day in July when the temperature hit 117. My little car started to overheat so I turned off the A/C, rolled the windows down, and hoped for the best. Then this song came on. It had such a great groove to it that I almost forgot my miserable circumstances. Almost.
It was also so very, very appropriate. Still is, even in winter.
For some reason, I read a lot of science fiction during the holidays. Maybe this is because I loved science fiction as a kid, and I had more time to read it during the Christmas break. At any rate, this year I decided to post a list of great literary science fiction novels.
I’m not qualified to give a meaningful explanation of the difference between popular and literary fiction. My old professor, Jonathan Penner, has already done that in his fine essay “Literary Fiction Versus Popular Fiction” (which I cannot find on the Internets, alas). But suffice it to say that popular fiction is defined by a formula. As Penner states, “Every work of literary fiction seeks to be like none other; every work of genre fiction seeks to be like many others. Genre fiction works for effects on which the reader knows he or she may rely. Literary fiction always tries to see the world freshly.”
The novels I list below certainly fall into the category of popular fiction. They all make use of the common tropes of science fiction stories: space travel, robots, aliens, and the end of the world. So why do I call them literary? Because each one of them, while working within its genre, is centered on vivid and successfully realized characters. Also, each is written with an emphatic artfulness and grace.
So here’s the list:
The Left Hand of Darkness Strangely enough, I first got interested in Ursula K. Le Guin’s classic novel when I read a review of it by John Updike. It’s about a human envoy who is sent to a distant alien planet called Gethen. Gethen is remarkable not only for its isolation, but also because the locals, although descended from Homo sapiens thousands of years previously, are all hermaphrodites. Specifically, they function as men for certain parts of their lives, and as women for others. It sounds a bit hokey now (the plot was even ripped off in an episode of Star Trek), but it’s a fine little novel, with some genuinely compelling drama. (It actually develops into a kind of wilderness adventure story when the main character gets entangled in the political intrigue of the Gethen government.)
Lord of Light Roger Zelazny was a hell of a good writer who is best known for his Chronicles of Amber series of fantasy novels. But my favorite is Lord of Light, which is the first novel I ever read that seems, at first, to be one genre (religious fantasy), but then reveals itself—convincingly—to be another (science fiction). Zelazny sets the story on an unnamed, earth-like planet where the gods of the Hindu pantheon (Brahma, Vishnu, Shiva, etc.) are real and very much alive. These deities use their godly powers to rule over a pre-literate culture of peasants (controlling them in ways that are often less than divine).
The story centers on Sam—who closely resembles Gautama Buddha but with a Marlowesque twist—who is trying to unseat the vain and profligate Gods from their position of power. Sam, we soon learn, is one of the original crew members of a starship called The Star of India, which crash-landed on the planet eons previously. Apparently, Sam and the other surviving officers used their technology to set themselves up as “gods” to rule over the other castaways, and eventually came to believe their own propaganda. It’s a far-out idea, which Zelazny delivers in a surprisingly subtle, vivid novel that is part epic, part spoof.
Virtual Light I think William Gibson is one of the best writers of his generation, literary or otherwise. He coined the term cyberspace and, more than anyone, defined the cyberpunk genre, of which Virtual Light is my favorite example. It’s about a young bike messenger, Chevette, who spends her days cycling through the teeming, corporate-ruled streets of a future San Francisco. In a plot twist that is almost Dickensian in its lyricism, Chevette accidentally comes into possession of a very unusual pair of computerized sunglasses, which a villainous corporate hit-man is very anxious to get back. Gibson is a genius at mixing high-tech plots with low-tech heroes, and Chevette soon has to take refuge in an underground, DIY society built on the remnants of the Oakland Bay Bridge.
The Martian Chronicles Okay, it’s a stretch to put this one on the list, not because it isn’t literary, but because it really isn’t science fiction. It’s actually a collection of interlocking fables, all of which are set on Mars during the early phases of its colonization. Unfortunately, the planet is already occupied by an ancient race of Martians. If you substitute Illinois for Mars and the Sioux for the Martians, you get an idea of the feel of this novel, which Bradbury himself said was inspired by Sherwood Anderson’s classic collection, Winesburg, Ohio. Many of the stories are genuinely beautiful and haunting.
The Tripods If you didn’t read Samuel Youd’s trilogy when you were a kid, you should do so now, especially if you have a kid of your own. Earth has been conquered by aliens, and what remains of humanity exists in a servile, pre-industrial society. Young Will and his cousin Henry escape their village and go on a quest for the fabled “White Mountains,” where a human resistance is forming.
A Clockwork Orange The movie has outshone the book for most of my lifetime. But the book is actually better, written in the inimitable voice of Alex, a fourteen-year-old sociopath who is just trying to have a good time in an Orwellian England of the near future.
God Emperor of Dune The original Dune is a really cool adventure story. I always thought of it as Lawrence of Arabia in the 30th Century. But I think Herbert’s fourth novel in the series, God Emperor of Dune, is the best, from both a narrative and stylistic point of view. It focuses on just a couple of characters (as opposed to dozens), and it presents Leto Atreides (son of Paul) as a genuinely sympathetic character.
Nova Samuel R. Delany could never decide whether he was a physicist or a mythologist or a literary fiction writer. Why limit yourself? Nova, the story of a mad space-captain seeking to fly through the center of an exploding star, is Delany’s tightest and most interesting novel.
Childhood’s End I hesitate to put this one on the list. I loved it as a kid, as I loved all of Arthur C. Clarke’s books. As a writer, he was very, very limited. But in Childhood’s End, he really outdid himself. It’s the first novel I ever read that broached the concept of a technological Singularity.
Do Androids Dream of Electric Sheep I can’t help but think that, in some alternate universe much like those he described in his novels, Philip K. Dick is alive and well and living as a respected and beloved literary fiction writer. In this universe, however, he wrote mesmerizing, almost hallucinogenic sci-fi novels about good and evil, the definition of “reality”, and what it means to be human. His best book, Do Androids Dream of Electric Sheep, is a haunting classic.
Author’s Note: This post first appeared some years ago on my old blog, Bakhtin’s Cigarettes.
Well, it’s another day of the week ending in Y, so it’s time for another Shameless Plug! This one is not even original, alas. Even so, it’s pretty cool that you can listen to the first three chapters of my novel, Twice the Trouble, totally free on YouTube! And it’s legal, even! (I think.)
I had never heard of Young the Giant until a few months ago, when this song, “My Body,” popped up on the playlist at my gym. I liked the groove so much that I paused my incredibly wimpy set of curls (“Hey, I’m going for tone, dammit. TONE!”), went to my locker, got out my phone, and Googled it.
The rest is history…
People have compared Young the Giant to The Cure, but they remind me more of Coldplay, albeit with more of a rock edge (not to mention a bit more soul).
As part of the work for the Read a Classic Novel…Together channel, I’ve been reading a very old classic indeed, The Life and Opinions of Tristram Shandy, Gentleman (almost always abbreviated to just Tristram Shandy). Published by Laurence Sterne in nine volumes between 1759 and 1767, it’s a very funny, sly book—kind of like Curb Your Enthusiasm, but in the 18th Century—and I found myself imagining George Washington reading it (and laughing his ass off) while holed up in some winter bunker during the dark days of the American Revolutionary War.
The story is told by Shandy himself, a very smart but self-absorbed and neurotic man (again, a lot like Larry David) who tells his life story in wry, sardonic, and occasionally schizophrenic prose. At one point in his rambling narrative, he mentions the “seven planets” in the heavens, which surprised me. Sure, I knew that even the ancient Romans knew about the inner planets, as well as Jupiter and Saturn. But the fact that the unfathomably distant Uranus was known in the 18th Century struck me as remarkable.
As it turns out, I was wrong. Uranus was not discovered until 1781, over a decade after Sterne completed his novel (but still much earlier than I thought). Which means that there is no way that either Sterne, the writer, or Shandy, the character, could have known about the actual seventh planet.
So, what, exactly, is Shandy alluding to in the “seven planets” bit? It seems that he was referring to the astrological planets in their classical (and very unscientific) sense, i.e., the Moon, Mercury, Venus, the Sun, Mars, Jupiter, and Saturn. This context makes sense, in retrospect, because Shandy is obsessed with astrology and its supposed effect on a person’s character. (Yes, he’s a bit of a kook.)
Laurence Sterne
By the time I sorted all this out, however, it was too late; I was already deep down the Wikipedia rabbit-hole. I looked up the history of the outermost planet (not to be confused with dwarf planets like Pluto and Ceres), Neptune. Neptune was discovered in 1846, largely on the strength of calculations by the French astronomer Urbain Le Verrier, and it was the first planet whose existence was surmised before it was actually observed. That is, Le Verrier and, independently, the English astronomer John Couch Adams had noticed irregularities in Uranus’s orbit, which, they suspected, might be caused by an eighth, hitherto unseen planet. Le Verrier then deduced the probable location of this hypothetical planet, and the astronomer Johann Galle, working from Le Verrier’s prediction at the Berlin Observatory, hunted it down within a degree of the predicted spot.
And that’s not all! I also learned that Neptune was actually discovered before it was discovered. As historians later found, Neptune had been seen at least three times before: by Galileo Galilei in 1613, Jérôme Lalande in 1795, and John Herschel in 1830. Each of these men recorded seeing something in that spot, but none of them realized it was a planet and not just a weird “fixed” star.
Apparently, this sort of thing happens with some frequency in the world of science, to the point that it actually has a name: precovery. A precovery is when, after an object is finally discovered, someone realizes it had already been recorded in earlier observations—in other words, someone once had all the information they needed to make a real (and potentially career-making) discovery, but they never put the pieces together. (Or, at least, they never published a paper about it.)
I think there might be a profound lesson here about the difference between data and knowledge, observation and understanding.
It’s also a lesson about publishing what you’ve got, ASAP. Before some other doofus steals your thunder.
I find it interesting that the two most famous architects in American history—famous, that is, among ordinary people who don’t subscribe to Architectural Digest—were both named Frank. They were, of course, Frank Lloyd Wright and Frank Gehry. Both men created buildings that captured the popular imagination like few others. And both were mavericks whose vision of what architecture could do often offended the mavens of the status quo (not to mention the bean-counters who worked for the rich people who funded their projects).
Both men also shared a sense of play in their work—Gehry to a much greater degree, sometimes designing homes and offices and other buildings that veered into pure fantasy. He often brainstormed new projects with strips of paper and cardboard, envisioning light, fluid, soaring structures that, one could argue, would not have been possible to actually build in an age before computer-aided design was available.
This emphasis on play and the power of imagination was evident in all his work, even in huge civic projects like his Walt Disney Concert Hall in Los Angeles. As Paul Goldberger relates in his fine biography, Building Art: The Life and Work of Frank Gehry, Gehry took an almost impish pleasure in fooling around with his own designs. When he was in the early stages of mocking up his plans for another auditorium in Asia, he shocked and amazed his colleagues by adjusting the position of the auditorium every night or so. As one friend put it,
…[H]e became a monster. He started moving stuff around.… We were doing a project in Korea that never got built [the museum for Samsung] but every time I went on a trip and came back he had moved the auditorium. He was impeccable. He had incredible reasons for it. He’s really brilliant. He doesn’t sleep at night and he comes back the next morning and moves the auditorium.
As Goldberger explains…
Moving the auditorium, in Frank’s view, was a form of what he liked to call “play,” and it was largely instinctive. “A serious CEO, you would imagine, does not think of creative spirit as play. And yet it is,” he said. “Creativity, the way I characterize it, is that you’re searching for something. You have a goal. You’re not sure where it’s going. So when I meet with my people and start thinking and making models and stuff, it is like play.”
As the title of Goldberger’s book suggests, Gehry saw himself as almost more an artist than an architect. At times, he refused to believe that one needed, necessarily, to make a distinction between the two. Early in his career, Gehry befriended and hung out with the great modern artists of Southern California, and they reciprocated his admiration. Perhaps this is the reason that Gehry’s greatest buildings resemble works of art more than those of perhaps any other architect.
Guggenheim Museum, Bilbao
His most famous is, of course, the Guggenheim Museum in Bilbao, Spain. When the museum finally opened in 1997, a flood of tourists came from all over the world to see it, prompting some of the artists whose work was displayed inside the museum to feel that they were playing second fiddle to the building itself. This grumbling grew into a modest backlash among the artistic community, focused not so much on Gehry himself as on the fawning admiration of journalists and other architects who often lauded Gehry as an “artist.” As Gehry’s own collaborator and friend, the sculptor Richard Serra, said,
I don’t believe Frank is an artist. I don’t believe Rem Koolhaas is an artist. Sure, there are comparable overlaps in the language between sculpture and architecture, between painting and architecture. There are overlaps between all kinds of human activities. But there are also differences that have gone on for centuries.
Whether he was being lauded or criticized, Gehry himself never seemed concerned. In fact, when compared to that other great architect named Frank, Gehry usually seemed downright humble, if not pathologically shy. Goldberger writes:
Even though Gehry was ridden with angst throughout his life, his manner came off as relaxed, low-key, and amiable, and his steely determination, far from being obvious like Wright’s, was hidden behind an easygoing exterior, a kind of “aw shucks” air that Gehry’s old friend the artist Peter Alexander called “his gentle, humble ways.” Wright was never mistaken for being modest; Gehry often was.
Gehry was so shy, in fact, that I feel he could have been much more famous than he was if he had gotten himself out there, gone on TV more, granted more interviews, and written some puff-pieces for various magazines and web sites. The fact that he did not is, I suppose, the most telling fact about the man’s character. Namely, that he was a genius who was determined to create the most original and uplifting works he could…and, then, to let those works speak for themselves.