The Optimist's Dilemma describes the tension between belief in the goodness of technological innovation and the knowledge that powerful inventions can also bring catastrophic consequences.
Whistling Past the Warnings
In 2023, Elon Musk, Steve Wozniak, and over 30,000 others signed “Pause Giant AI Experiments: An Open Letter,” urging a six-month pause on developing AI systems more powerful than GPT-4. The letter stated:
“AI systems with human-competitive intelligence can pose profound risks to society and humanity... Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”
Despite these urgent calls for caution, the race for supremacy was on, and AI labs intensified their development efforts rather than slowing them down.
In June 2024, a second warning came from sixteen current and former AI industry employees, published under the title A Right to Warn about Advanced Artificial Intelligence. These were insiders—people who had worked on AI’s front lines—and their message was even more dire:
“These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”
Then, earlier this month, a team of expert researchers led by Daniel Kokotajlo published AI 2027, which describes a compelling scenario for how artificial intelligence (AI) could continue to evolve exponentially, and possibly destroy human civilization. Read more about these potential outcomes in my post:
Utopia or Havoc?
Late last month, Wired published a piece that perfectly captures the optimist’s AI dilemma: If Anthropic Succeeds, a Nation of Benevolent AI Geniuses Could Be Born. In the article, Steven Levy describes the buoyant optimism of Anthropic’s cofounder and CEO, Dario Amodei:
You can almost feel his bones vibrate as he explains how his company, Anthropic, is unlike other AI model builders. He’s trying to create an artificial general intelligence—or as he calls it, “powerful AI”—that will never go rogue. It’ll be a good guy, an usher of utopia.
Later in the same article, Levy quotes Demis Hassabis, co-founder and CEO of Google DeepMind, who warns of the potential for AI to “wreak havoc.”
DeepMind’s Hassabis says he appreciates Anthropic’s efforts to model responsible AI. “If we join in,” he says, “then others do as well, and suddenly you’ve got critical mass.” He also acknowledges that in the fury of competition, those stricter safety standards might be a tough sell. “There is a different race, a race to the bottom, where if you’re behind in getting the performance up to a certain level but you’ve got good engineering talent, you can cut some corners,” he says. “It remains to be seen whether the race to the top or the race to the bottom wins out.”
[Anthropic’s] Amodei feels that society has yet to grok the urgency of the situation. “There is compelling evidence that the models can wreak havoc,” he says. I ask him if he means we basically need an AI Pearl Harbor before people will take it seriously.
“Basically, yeah,” he replies.
Bill Gates is Optimistic… and Concerned
Late last year, Jason Cowley, editor-in-chief of the New Statesman, had a conversation with Bill Gates that became the cover story for the November 27 issue.
Cowley found Bill Gates standing at a crossroads, embodying the optimist's dilemma—a tension between boundless belief in technological innovation and the stark realities of powerful technologies and global turmoil.
Before we parted, I asked Gates what had surprised him most about human nature, and he said: “I’m very positive about human nature. The human condition is better today than at any time in history, and people don’t step back and see that!”
But he paused, as if a dark shadow had fallen across the room. The optimist’s dilemma was framed like this by Colin S. Gray, author of The Geopolitics of the Nuclear Era: “Optimism and pessimism can be perilous attitudes that undergird policy. But of the two, optimism is apt to kill with greater certainty.”

“I am an optimist,” Gates repeated. “But I’m worried about polarisation, I always worry about bioterrorism, I worry about nuclear weapons, I worry about climate change, and now I would add AI; although it’s the most positive innovation, it happens so quickly that it will be quite disruptive.”
For over two decades, Gates has passionately advocated for international health, pouring billions into vaccines, education, and development through his foundation, driven by the conviction that progress, especially through AI and innovation, is inevitable. Yet, faced with political polarization, the resurgence of nationalism, the shadow of bioterrorism, and disinformation-fueled conspiracies, Gates admits our situation is unstable and precarious, as demonstrated by the UK’s (and now the U.S.’s) retreat from their international aid commitments.
Despite measurable successes—vaccinating billions, nearly eradicating polio, pioneering AI-driven medical tools—Gates fears that nations preoccupied with domestic concerns and geopolitical tensions might squander these gains, plunging humanity backward rather than forward. He remains hopeful, but his optimism is tempered by awareness that in our increasingly fractured world, enlightenment isn't guaranteed—and progress can reverse quickly.
G.A.I.A.: A World on the Brink in the Age of A.I.
As we consider the impact of AI on society’s trajectory, we must also confront the possibility that progress, as we know it, may be leading us into uncharted and perhaps dangerous territory. This theme is central to my novel, G.A.I.A., which imagines a world on the brink of technological and environmental collapse. It is available now on Amazon, Barnes & Noble, Bookshop.org, and from local booksellers.
Subscribe for more posts that explore the human and technological forces shaping our future.