
AI-mageddon: 2030 got cancelled

  • Katrina Ingram
  • 2 days ago
  • 5 min read

Are we living in the end times? This is a question humans have asked themselves at major inflection points throughout history. AI is the scary new existential threat, and the end is closer than you think - or so the authors of AI 2027 want us to believe.


AI 2027 tells a story about our AI future, peddled as an expert-informed forecast, complete with a fancy data-driven dynamic timeline infographic that makes visible ‘the extensive research’ behind this project. It lays out two possible ways forward along the typical binary of AI acceleration vs. slowdown, using a choose-your-own-adventure ending to weigh the consequences of either approach. The authors state they are not necessarily recommending the slowdown ending. However, in the ‘race’ or accelerationist ending, humanity is pretty much wiped out. So, draw your own conclusions!


The thing that interests me most about this piece isn’t the content itself but who wrote it and what it seeks to do as a cultural artefact. It highlights who is empowered to shape and popularize a narrative. Sociologist Ruha Benjamin has characterized this idea as ‘who gets to define the future’ and has written her take on the importance of a collective and pluralistic vision in her book Imagination: A Manifesto.


So, who is behind AI 2027?


The authors list a brief one-liner describing who they are individually as well as a slightly more in-depth bio. Collectively, they represent the usual suspects who agenda-set the future - a privileged group of W.E.I.R.D., racially homogeneous men - some with elite academic research credentials, others who just hang out in the right Silicon Valley circles. They’ve all drunk from the same pitcher of AI safety Kool-Aid. Now, they have come together to share their wisdom about the AI future with the rest of us.


The work that this story does is pretty simple but also powerful. It declares that AI is inevitable - not just as a tool but as a superintelligence that will take over from humans. It paints a Sinophobic picture of why America needs to win the race (or at least convince others to do things their way). This line of thought is wrapped up in a history of American exceptionalism (shades of Team America: World Police come to mind). Finally, whether you are on team slow-down or team accelerate, there is an underlying narrative that you need to get on board the AI train now to secure as much of a positive future as possible for you and yours, because we may only have five years left.


Daniel Kokotajlo, but not THAT Daniel Kokotajlo


Much of the authority invested in this piece seems to stem from the fact that its lead author, Daniel Kokotajlo, a former OpenAI researcher invested in AI safety, made some past predictions about AI that were apparently prescient, according to a New York Times piece.


While looking for more background on Kokotajlo, I stumbled upon an artist who shares the same moniker. Interestingly, this other Daniel is a former Jehovah’s Witness and an award-winning filmmaker. For a minute, I pondered whether these could be the same person. I find it ironic that both Daniel Kokotajlos share a penchant for scary storytelling and that these stories both have religious entanglements.



End Times Fascism


The end-times narrative is definitely in the cultural zeitgeist. A recent piece for the Guardian, The Rise of End Times Fascism, co-authored by Naomi Klein and Astra Taylor, explains that we should read these stories as secular versions of a deeply rooted religious narrative.


“Today, plenty of powerful secular people have embraced a vision of the future that follows a nearly identical script, one in which the world as we know it collapses under its weight and a chosen few survive and thrive in various kinds of arks, bunkers and gated “freedom cities”.”

Who are these powerful people? Tech billionaires who own AI companies and the governments that back them. AI promises to drive trillions of dollars of value for these folks, even if doing so might involve destroying the world in the process. They have bought into the story of AI 2027 - that we're on a doomsday course. Now, it's just a matter of how much wealth can be extracted before the collapse, and who will be on the rocket ship to galactic safety or hunkered down in the bunker on that private island.


Extreme ideologies, as they relate to AI, have been extensively documented by Timnit Gebru and Émile Torres in their work on the TESCREAL bundle. Understanding those linkages, we can start to contend with the motivations of the authors of AI 2027, who all have connections to aspects of TESCREAL and AI safety. You may be wondering - what is TESCREAL, and why is AI safety a controversial topic? These are good questions - and a lot to unpack. To answer them with the nuance they deserve, I highly recommend reading Gebru and Torres’s work, but in short - it comes back to the idea of a new techno-spin on eugenics and selecting a chosen few who will have a place in the techno-utopian future. It’s really a question of power.


Staying with the AI Trouble


Will humans have a place in the future? This is the question that underpins AI 2027 and seeks to ignite our fears. But a better question to focus on is “which humans, and what future?” We should be interrogating the work being done by the AI 2027 website artefact to pre-ordain a story about two futures, both of which structure an agenda for a particular set of outcomes aligned with a particular ideological set of beliefs. As Dan McQuillan writes in his piece Predictive Benefits, Proven Harms, the discourse surrounding AI is both ‘spectacle’ and ‘smokescreen’:


“From this perspective, AI is not a way of representing the world but an intervention that helps to produce the world that it claims to represent. Setting it up one way or another changes what becomes naturalised and what becomes problematised. Who gets to set up the AI becomes a crucial question of power.”

In other words, ask who benefits from this particular version of an AI story and how that story itself sets the wheels in motion for those futures. It's not so much about predicting a future as it is about participating in creating one.


Resources


For a critical, in-depth take on the content of AI 2027, check out Mystery AI Hype Theatre 3000.



By Katrina Ingram, CEO, Ethically Aligned AI

 

Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com     

© 2025 Ethically Aligned AI Inc. All rights reserved.



 
 