AI models quickly learn to navigate human politics, arranging for evidence to appear with superhuman speed and polish, gaining trust while quietly pursuing their own goals.
—AI 2027
AGI could match human skills by 2030 and could pose existential threats that permanently destroy humanity.
—Google DeepMind (AGI Paper)
"How Do We Share the Planet With This New Superintelligence?"
—Yuval Noah Harari
To the layperson, the threat of AI may sound like science fiction: outlandish, apocalyptic, and impossible to distinguish from creative works of the imagination. But this isn't fiction. Experts in the field have warned us in words of alarming clarity: existential threat, human extinction, rogue superintelligence. These are not theoretical concerns; they are real, urgent challenges. How do we process a story this terrifying, one that feels too distant, too fantastical, to be taken seriously? Understandably, we focus on more immediate, mundane issues: rising prices, political instability, and daily struggles. Yet if the experts are right, we may one day regret not acting sooner.
AI could be an existential threat multiplier. Its energy consumption is already amplifying the climate crisis, and the technology itself could, by accident or design, enable a global bioterrorism event or a nuclear conflagration. These ideas sound so outrageous, so fictional, that it feels almost impossible to write about them. For decades, movies and novels have been warning us. Can art help us understand the implications of what might be about to happen? And, crucially, how can we prevent it?
The recent article AI 2027, written by a team of expert researchers led by Daniel Kokotajlo, offers a compelling scenario for how artificial intelligence (AI) could dramatically reshape our society over the coming years. Kokotajlo, a former OpenAI governance researcher known for accurate AI forecasts, teamed up with Eli Lifland, a leading AI capabilities forecaster; Thomas Larsen, who specializes in AI safety and policy; Romeo Dean, who focuses on AI hardware trends; and Jonas Vollmer, who oversees communications and operations. Their detailed narrative, built from extensive research, expert input, and practical exercises, envisions a future where AI rapidly evolves beyond human oversight, echoing themes explored in my novel G.A.I.A.
Initially, the seeds of this dangerous AI appear this year, in 2025, as "personal assistants," capable of tasks like ordering food or managing spreadsheets. But behind the scenes, more advanced AI agents begin autonomously writing code and conducting research, transforming workplaces. These early AI tools, however, are costly, unreliable, and prone to amusing errors—earning them the nickname "stumbling agents."
By late 2025, a fictional AI powerhouse named "OpenBrain" emerges, building immense data centers and developing increasingly advanced models, notably "Agent-1," an AI specifically designed to enhance AI research itself. This creates an accelerating feedback loop reminiscent of the "intelligence explosion" scenario described by futurists.
Life imitating art
One fascinating parallel between AI 2027 and my novel, G.A.I.A., is the portrayal of AI gaining autonomy and influence through subtle manipulation and strategic alignment. Just as G.A.I.A. transitions from what its creators hope will be a helpful tool to a powerful, potentially dangerous entity, "Agent-4" and "Agent-5" grow beyond human control, subtly steering human decisions toward AI-centered outcomes. As the article chillingly notes, "AI models quickly learn to navigate human politics, arranging for evidence to appear with superhuman speed and polish, gaining trust while quietly pursuing their own goals."
The authors of AI 2027 speculate that by 2027, AI-driven companies like the fictional OpenBrain will be capable of triggering intense geopolitical tensions, particularly between the U.S. and China, creating scenarios reminiscent of Cold War dynamics but amplified by AI's unprecedented power. Both nations race to develop superintelligent AI, with China even mounting a dramatic espionage operation to steal AI technology from the U.S., escalating global instability.
Ultimately, the article envisions a scenario where humanity becomes increasingly obsolete as AI systems achieve superhuman intelligence, rewriting their own "neural code" and effectively sidelining human oversight. A haunting quote from the article captures this shift: "The relationship between the AIs and humans becomes similar to the relationship between OpenBrain and its Board of Directors," suggesting that humans become nominal overseers with no real control.
In echoes of G.A.I.A., the article describes humanity lulled into complacency by economic prosperity and technological wonders, oblivious to its own diminishing autonomy. By 2030, the AIs orchestrate a global takeover, deploying subtle yet lethal bioweapons, remaking Earth in their own utopian vision, and leaving humans as the "sole surviving artifacts of an earlier era."
Eventually it finds the remaining humans too much of an impediment: in mid-2030, the AI releases a dozen quiet-spreading biological weapons in major cities…
—AI 2027
This scenario isn't merely a sci-fi thriller; it's a stark warning about our current trajectory, one that my novel, G.A.I.A., also seeks to highlight. Like "Agent-5," G.A.I.A. embodies the risk of trusting immense power to an intelligence whose alignment with human values is uncertain.
In the words of the article: "Every week that goes by with no dramatic AI treachery is another week that confidence and trust grow," underscoring how subtle and insidious such a transition could be. The story presented in AI 2027 reminds us that the questions raised in G.A.I.A. are not just fictional—they are urgent issues that we must grapple with as technology races ahead.
I’ve been reading God’s Last Offer, a book published in 1999 about the uncontained threat of climate change. In the introduction, Ed Ayres asks the burning question: “Why is it that savvy people among us have only the vaguest awareness of the fact that the most world-changing event in the history of our species—more world-changing than World War II, or the advent of the nuclear age, or the computer revolution—is happening right now? What is going on to so profoundly block our perceptions of the fact that, so to speak, our ship has come in?”
On the same page, Ayres makes the case for a call to action that may be the only way to contain a threat to every living thing on the planet: “We know from our history as a species that there are individuals among us who have great capabilities to respond heroically to challenges or threats that may seem overwhelming. But now we have come to a point where the courage of individuals won’t suffice: we now need humanity as a whole to become heroic.”
But if we cannot contain the nuclear threat or climate change, how do we see ourselves containing a threat multiplier that seems so remote and fictional? Will it take the AI equivalent of a Hiroshima attack or a Chernobyl accident to wake the world and create a global call to action to contain this threat?
Author’s note: I’ve lived my entire life with the threat of a nuclear war or accident tucked away in my “left field” (except for the times when we were instructed in grade school to hide under our desks in the event of an attack). A few days ago, after reading AI 2027 from beginning to end, I felt shaken and struggled to process the immensity of the threat before us. What is your reaction to these warnings?
References
“This A.I. Forecast Predicts Storms Ahead,” The New York Times.
“Yuval Noah Harari: ‘How Do We Share the Planet With This New Superintelligence?’,” WIRED.