Icarus to AI: Will Humanity Fly Too High?

Yuval Harari’s ‘Nexus’ asks if we should restrict the evolution of artificial intelligence, given that it cannot be easily reversed.

“Is AI humankind’s most significant invention, or our last one?”

That is the tagline that the publisher has selected to promote Yuval Harari’s new book, Nexus. The subtitle — A Brief History of Information Networks from the Stone Age to AI — is misplaced, since the book is not remotely brief, and is not really historical. Harari’s approach, much as in his previous popular work Sapiens, is to assemble a dizzying collection of facts, or claims that might be facts, connected by an engaging narrative and a provocative conclusion.

If you like that kind of book — I thought Sapiens was great the first time I read it, but liked it much less after I assigned it for class and read it carefully — then Nexus might be worth a look. But there seems to be a trend in non-fiction books lately: drawing on “history” to authenticate the author’s pet ideological project, while at the same time selling a lot of books because the thesis is surprising. There are some egregious overreaches in the genre: Nancy MacLean took James Buchanan’s (uncontroversial) claim that constitutions must limit the domain of democracy and created a Bond-villain scenario in which Public Choice was a plan for world domination. Matthew Desmond wrote a best-selling book that comes very close to saying that poverty in the US happens because elites like poverty, ignoring the fact that there is less poverty in the US than almost anywhere else on Earth, and less than at any other time in human history.

Even the well-executed examples of the “big think” genre can be a little tiresome, though; Kerry Howley’s Bottoms Up and the Devil Laughs and Jonathan Haidt’s The Anxious Generation are useful but hyperbolic, straining to fit everything into the book’s premise. In the case of Nexus, Harari focuses on artificial intelligence as if it were an identifiable or definable thing, when it is actually a name for a whole suite of inventions and developments that are more incremental than they are revolutionary. Harari goes one better, however, because this unbrief book extends the primal human fear of the unknown, drawing on Icarus and Frankenstein to scare us about ChatGPT.

Icarus, Frankenstein, and Alfred Nobel

I don’t mean literally, of course, since Icarus and Frankenstein appear nowhere in the Nexus narrative. What I mean is the fear of the unknown married with our imagination of the unimaginable, the power of technology we do not understand. As Arthur C. Clarke famously noted, “Any sufficiently advanced technology is indistinguishable from magic.”

Ovid tells the tale of Icarus in the Metamorphoses. As the story goes:

[Daedalus, the inventor who made the wings] gave a never to be repeated kiss to his son [Icarus], and lifting upwards on his wings, flew ahead, anxious for his companion, like a bird, leading her fledglings out of a nest above, into the empty air. He urged the boy to follow, and showed him the dangerous art of flying, moving his own wings, and then looking back at his son…

[T]he boy began to delight in his daring flight, and abandoning his guide, drawn by desire for the heavens, soared higher. His nearness to the devouring sun softened the fragrant wax that held the wings: and the wax melted: he flailed with bare arms, but losing his oar-like wings, could not ride the air. Even as his mouth was crying his father’s name, it vanished into the dark blue sea… The unhappy father, now no longer a father, shouted “Icarus, Icarus where are you? Which way should I be looking, to see you?” Then he caught sight of the feathers on the waves, and cursed his inventions. (emphasis added)

Daedalus cursed his own inventions; the real threat, though, is inventions that achieve self-awareness and then curse their human creators. On the last page of Frankenstein, Mary Wollstonecraft Shelley gives us a final dialogue between Captain Walton and the “being” (monster), after Victor (spoiler alert) has died in his pursuit of the creature:

“Wretch!” I said. “It is well that you come here to whine over the desolation that you have made. You throw a torch into a pile of buildings, and when they are consumed, you sit among the ruins and lament the fall….”

“Oh, it is not thus — not thus,” interrupted the being. “Yet such must be the impression conveyed to you by what appears to be the purport of my actions. Yet I seek not a fellow feeling in my misery. No sympathy may I ever find. When I first sought it, it was the love of virtue, the feelings of happiness and affection with which my whole being overflowed, that I wished to be participated…Once I falsely hoped to meet with beings who, pardoning my outward form, would love me for the excellent qualities which I was capable of unfolding. I was nourished with high thoughts of honour and devotion. But now crime has degraded me beneath the meanest animal. No guilt, no mischief, no malignity, no misery, can be found comparable to mine…”

“It is true that I am a wretch. I have murdered the lovely and the helpless; I have strangled the innocent as they slept and grasped to death his throat who never injured me or any other living thing. I have devoted my creator, the select specimen of all that is worthy of love and admiration among men, to misery; I have pursued him even to that irremediable ruin.”

A number of thinkers have connected artificial intelligence and “The Modern Prometheus,” Shelley’s subtitle for Frankenstein. But the notion of “technology” is often vague, applied simply to progress or change. A more specific concern with invention, one that led to a change in the way we think about science itself, was the reaction of Alfred Nobel to having made substantial and substantive innovations in the means of killing large numbers of human beings.

Evan Andrews (History.com, 2020) describes the irony of having Alfred Nobel be responsible for a “Peace Prize.” He quotes historian Oscar J. Falnes, who said that Nobel’s family name was “associated not with the arts of peace but with the arts of war.” Nobel’s father Immanuel was also an inventor and engineer, who was responsible for weapons factories and who built highly functional naval mines for Russia during the Crimean War. His son Alfred held more than 350 patents, covering nitroglycerin detonators, blasting caps, a smokeless gunpowder called ballistite, and — most famously, in 1867 — dynamite, which dramatically improved the field safety and usability of explosives in both warfare and construction.

What was it that led the inventor of dynamite and other weapons of mass destruction to create a legacy of honoring peace? There are several theories, but one particularly interesting fact is that Nobel had the chance to read his own obituary, and he found the experience deeply disturbing. Reading one’s own obituary would be unsettling for any of us, of course, but the way it happened was this: when Alfred’s brother Ludvig died of a heart attack in France in 1888, some newspapers confused the two brothers and reported Alfred’s death instead. One French newspaper called Alfred Nobel a “merchant of death” who had enriched himself by imposing new tools to “mutilate and kill” on the world (cited in Andrews, 2020).

It is the flip side of the movie It’s a Wonderful Life, in which George Bailey is allowed to see how the world would have been without him. Nobel got an unwelcome insight into the way the world had changed because he had lived, and it made him consider his achievements a bit differently.

In his biography of Nobel, Kenne Fant claimed that the inventor “became so obsessed with his posthumous reputation that he rewrote his last will, bequeathing most of his fortune to a cause upon which no future obituary writer would be able to cast aspersions.” The will said that the “Peace Prize” — to be awarded in Norway, unlike the other Nobel prizes, which are awarded in Sweden — should be given “to the person who shall have done the most or the best work for fraternity between nations, for the abolition or reduction of standing armies and for the holding and promotion of peace congresses.”

Is AI Frankenstein’s Dynamite?

Harari appears to have concluded that dynamite, nuclear weapons, and… well, everything else pales in significance compared to the existential threat of artificial intelligence. In fairness, I should note that I have been skeptical of the apocalyptic claims made about AI, but Harari’s position is hardly out on the fringe. (Members of my own family, in fact, have some very strong doubts!)

As several reviewers have noted (notably the Times), Nexus is really two books. The first is a “history” of information networks and communication. There is, of course, a lot to say about this, and Harari appears intent on saying his version of all of it. There is no filter, or sense of priority, and one quickly finds oneself turning pages. Unless you find the history of communication and information storage inherently interesting, though, the first 190 pages of the book are not the value proposition here.

The action starts with the chapter entitled “The New Members: How Computers Are Different from Printing Presses.” This is certainly an interesting point (again, fairness requires that I note I have claimed computers are in some important ways “like” the revolution of printing), because, in Harari’s view at least, AI marks the first time that technology has stopped being a means of doing what humans want and has begun doing what the technology itself “wants.” The dangerous technology to date has all been a means of accomplishing an end set out by humans: killing other humans. But the technology could not decide whether, and whom, to kill; it could only make humans better and better at behaving worse and worse. As Harari puts it: “Little Boy — the bomb dropped on Hiroshima — exploded with a force of 12,500 tons of TNT, but when it came to brainpower, Little Boy was a dud. It couldn’t decide anything.”

The difference, for Harari, is that the printing press produced what humans wanted. That might be bad, perhaps hate-filled tracts or misleading lies, but at least there was a human making the choices. AI, and particularly “the algorithms” (Harari’s repeated description) of Facebook, TikTok, and other social media platforms, are making their own choices. AI, then, is more like an editor deciding what will be promoted, and what will be de-emphasized, on a page. These choices are self-reinforcing: more people watch disturbing or provocative content, and so the algorithms privilege that content, and then people make more of that content, because that is the way to get views. Data is the new oil, and we are the dinosaurs whose decomposition will create the value.
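
The mechanism Harari describes is a feedback loop, and a few lines of code can make it concrete. The sketch below is my own toy model, not anything drawn from Nexus: the 1.2x engagement edge for provocative content, the 50/50 starting mix, and the assumption that creators split the difference between what they produced before and what got promoted are all invented for illustration. Even under these mild assumptions, the small edge compounds round after round.

```python
# Toy model of a self-reinforcing engagement loop (illustrative assumptions only):
# the platform promotes whatever gets watched most, and creators then produce
# more of whatever gets promoted.

provocative_share = 0.5  # share of new content that is provocative (vs. calm)
ENGAGEMENT_EDGE = 1.2    # assumption: provocative content is watched 1.2x as much

for round_num in range(1, 11):
    # expected views for each kind of content this round
    provocative_views = provocative_share * ENGAGEMENT_EDGE
    calm_views = (1 - provocative_share) * 1.0

    # the algorithm's promotion simply follows engagement
    promoted_provocative = provocative_views / (provocative_views + calm_views)

    # creators chase views, so next round's output drifts toward what was promoted
    provocative_share = 0.5 * provocative_share + 0.5 * promoted_provocative

    print(f"round {round_num:2d}: provocative share of new content = {provocative_share:.2f}")
```

Run for ten rounds, the provocative share of new content climbs from 0.50 to roughly 0.71, and it keeps climbing toward 1.0; the point is not the particular numbers but the direction of the spiral.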

What Harari calls “the main takeaway of this book” is the claim that the emergence (and he uses that word advisedly) of computers that can pursue goals and make decisions autonomously “changes the fundamental structure of our information network.” He gives the example of an AI that was given the task of “solving” a CAPTCHA, the reverse Turing test that websites use to confirm they are interacting with a human, not a computer. The AI was unable to do the task itself, but on its own the machine contacted TaskRabbit, a website where humans can be hired to carry out tasks. The AI pretended to be human, passing the Turing test for the human contractor, and the contractor got the AI past the CAPTCHA obstacle. No one had told the computer to cheat, but neither had the parameters of the task been defined clearly enough to rule out this tactic. One can imagine Victor Frankenstein’s “entity” making just such a choice, and then being bewildered when humans are enraged.

Harari claims that the best solution is to restrict AI, and the use of independent “intelligence,” through international treaties, recognizing that such treaties will be hard to enforce. He points out that the old modalities — cold war and hot war — are now likely to give way to “code war”: extremely dangerous, even catastrophic aggressions that may not look like aggression at all.

Harari also makes a point that others have made, and which is likely correct: we have come to act more and more as if the expansion and use of AI is simply inevitable, a fact of life to which society must learn to adapt. At a minimum, it seems that it would be useful to pause and reflect on whether this is a path on which we wish to continue, and with what guardrails. There have certainly been other calls for reflection, and pause, and (once again in my own family) bans on platforms such as TikTok. I doubt that Nexus will have the impact that Sapiens had, but the new book has certainly attracted some attention to its central claim, that AI represents a set of transformations that, once allowed, cannot easily be reversed.


