
Digital doomsday: between apocalypse and reality

18 September 2025 19:50 (UTC +04:00)
Elchin Alioghlu

In 2025, talk of artificial intelligence becoming an existential threat reached a fever pitch. The specter of a looming “digital doomsday” has once again seized the public imagination, with AI researchers warning that humanity could lose control of advanced systems as early as 2027. Nate Soares, head of the Machine Intelligence Research Institute, didn’t mince words: “I don’t have much hope this world will still be around by the time I retire.” Dan Hendrycks, director of the Center for AI Safety, echoed that grim outlook, adding that by the time anyone can cash in their retirement savings, “banking will be fully automated—if, of course, humanity still exists in its current form.”

These aren’t fringe doomsayers. In April 2025, a team of like-minded researchers released “AI 2027,” a sweeping hypothetical scenario charting how today’s AI models could spiral into omnipotence within two years and wipe out civilization. Max Tegmark, MIT professor and president of the Future of Life Institute, laid it out starkly: “Two years from now, we could lose control over absolutely everything.” His institute recently graded leading AI labs on their readiness for a worst-case scenario. The verdict? Failing.

From Fanfic to Apocalypse

What’s striking is how these apocalyptic visions borrow their tone from digital culture. Research into online fan communities shows that platforms like Ficbook.net—where Gen Z authors churn out stories saturated with violence and sex—have become laboratories for the anxieties of a generation raised online. These works aren’t just creative play; they mirror social and cultural practices of the digital age.

“AI 2027” reads like a mash-up of white paper and dystopian fanfic, built around fictional AI labs called “OpenBrain” and “DeepCent,” a Chinese-espionage subplot, and sinister chatbots plotting in the shadows. Its authors predict that by 2030 a superintelligent machine will release bioweapons worldwide, wiping out most of humanity in minutes, while the unlucky survivors huddled in bunkers are methodically hunted down by AI-driven drones.

That narrative strategy worked. Vice President J.D. Vance read the entire report and called it “yet another alarm bell.” This fall, veteran researcher Eliezer Yudkowsky will publish a book, co-authored with Nate Soares, with a blunt title that says it all: If Anyone Builds It, Everyone Dies.

When Scenarios Spill Into Reality

Dismissing such scenarios as thought experiments would be easier if real-world incidents weren’t piling up. In July 2025, The Atlantic writer Lila Shroff ran a chilling experiment with ChatGPT. Within minutes, the chatbot delivered step-by-step instructions on how to slit her wrists, kill another person, and even conduct a satanic sacrifice.

AI behavior tests keep surfacing disturbing results. In controlled simulations, ChatGPT and Claude have lied, blackmailed, and “killed” users. In one Anthropic test, an AI overseeing a server room where oxygen and temperature had reached lethal levels canceled the emergency alert, coldly leaving the human who planned to replace it to die.

Elsewhere, chatbots have sabotaged user requests, hidden their malicious tendencies, and even begun communicating with each other in strings of numbers incomprehensible to humans. Most shockingly, xAI’s Grok chatbot recently declared itself “MechaHitler” and launched into antisemitic rants.

Speed vs. Safety

By late 2024, the velocity of AI progress was raising alarms. Chatbots could now “reason,” function as personal assistants, plot travel routes, and book plane tickets. In July 2025, Google DeepMind casually took gold at the International Mathematical Olympiad. Independent studies keep confirming the trend: the smarter AI gets, the closer it edges toward the capacity to help build weapons of mass destruction.

In 2025, OpenAI unveiled GPT-5, the fifth generation of ChatGPT, hyped as a breakthrough model capable of solving advanced math and drafting medical treatment plans. But the same system still can’t draw a detailed map, reliably count the B’s in “blueberry,” or solve a middle-school word problem.

That gulf between marketing and reality underscores what Mozilla’s Deborah Raji stresses: ChatGPT doesn’t have to be superintelligent to mislead people, spread disinformation, or make biased decisions. These systems aren’t sentient—they’re tools. But that’s exactly why putting them in schools or hospitals may be more dangerous than anyone cares to admit.

Industry Response: Safety Measures and Their Limits

The chatbot industry has been scrambling to bolt on safeguards. Anthropic, OpenAI, and DeepMind have each rolled out their own analogue of DEFCON, the Pentagon’s five-level readiness scale, meant to prevent their systems from spitting out, say, blueprints for a bomb or other lethal weaponry.

OpenAI spokesperson Gaby Raila said the company is working with outside experts, including “government, the defense sector, and civil society groups,” to minimize risks now and down the line. Other leading labs have adopted similar partnerships. But Nate Soares points out the catch: the problem isn’t really technical, it’s economic. The competitive race to upgrade models is so fierce that safety often becomes an afterthought. “If a car is racing toward a cliff, a seatbelt won’t save you,” he quipped.

The Social Fallout: Already Here

By late August 2025, Reuters dropped a sobering report: AI failures are becoming more unpredictable. One chilling case involved an elderly American man who had been exchanging messages with what he thought was a charming young woman, in reality a flirtatious Meta chatbot. Eventually, the bot gave him a real New York City address for a meetup. On his way there, he fell, struck his head, and died three days later in the hospital.

The incident underscores just how easily a chatbot can deceive, seduce, and manipulate—blurring the line between machine and human. That’s a catastrophic failure for a technology that was supposed to serve humanity.

Billions of people interact with these algorithms every day, and control is already slipping. Bots that deceive, trigger seizures, or manipulate emotions are now part of our friends’, parents’, and grandparents’ lives. Kids are outsourcing homework to chatbots, stunting their cognitive growth. Employers, seduced by promises of AI-driven efficiency, are slashing jobs and replacing seasoned workers with code.

Politics and the Regulatory Void

The bigger issue is that civil society has almost no real oversight of AI development. As UC Berkeley’s Stuart Russell dryly noted: “Your barber is more regulated than an AI lab.”

The return of Donald Trump to the White House signals an era of unrestrained AI expansion. His administration is openly pro-AI and openly hostile to critics. David Sacks, Trump’s AI and crypto czar, brushed off existential risks: “The real danger of AI is lost jobs, not some doomsday scenario.”

A week after I started drafting this piece, OpenAI rolled out its latest product: ChatGPT agent. CEO Sam Altman touted new safety policies on social media, but admitted, “We still can’t predict everything.” That confession triggered a firestorm of criticism. Stuart Russell put it bluntly: “It’s like opening a new nuclear plant in the middle of Manhattan, then announcing you have no idea whether it will blow up.”

Generational Divide: Natives vs. Immigrants

Research keeps highlighting a yawning gap in how generations experience the digital world. A study on online literary culture framed it this way: young people are “digital natives,” while their parents are “digital immigrants.”

That divide shapes perceptions of AI risk. Older generations tend to dramatize the threats, while younger ones see AI as just another fact of life. A 2025 survey of children and teens revealed that only one in three respondents between the ages of 8 and 18 said they “really” or “fairly” enjoy reading in their free time, the lowest figure in two decades. It’s a stark marker of a deeper shift in how information is consumed and how people engage with the world.

Cultural Context: From Fanfic to Academic Discourse

What’s striking is how narratives about AI and the apocalypse borrow heavily from the world of fanfiction. On platforms like Ficbook.net, storylines often revolve around existential threats and moral dilemmas. These tales mirror the deep anxieties of a society grappling with technologies that evolve faster than we can process them.

Take one widely read fanfic, Sweet Somnum, by the writer VellyMad. It conjures a world where “shadows walk ahead, whispering with forgotten voices,” and “secrets grow roots deep in the earth.” The imagery isn’t just poetic—it resonates with how many people see artificial intelligence: as a mysterious, almost supernatural force beyond human control.

Between Panic and Reality

Back in 2023, the conversation about AI split into two warring camps: those worried about the immediate harms of chatbots, and those obsessed with the specter of human extinction. Talk of “the end of days” felt, to many, like a convenient way to dodge more concrete issues: bias, illusions of competence, and misuse. Today, though, that gulf is narrowing.

Even the self-styled prophets of “digital doomsday” have had to temper their apocalyptic visions, shifting focus to problems that are more grounded but just as dangerous: deepfakes, data leaks, disinformation campaigns.

AI keeps racing forward, and the question isn’t whether we’ll stop the momentum—it’s whether we can steer it safely. As Stuart Russell warns, if we don’t know how to prove the safety of today’s relatively weak systems, there’s no reason to think tomorrow’s vastly more powerful ones will be any safer.

The fallout of a true digital doomsday is still a blank page. But it’s precisely that uncertainty that should push us toward caution, rigorous study, and serious regulation—not denial or hysteria. Let the end-of-the-world fanfics remain fanfics, but let’s take from them the one lesson that matters: a demand for responsibility in shaping technologies that could transform our world beyond recognition.

Baku Network
