Just a fun drill.
For a previous iteration of scenario-building regarding AI’s development, see:
Summary of SIM scenario-building exercise on the future of AI
3-to-4 dozen Society for Information Management CIOs weigh in
The conceit this time around is that there are two big questions hanging out there:
In the inevitable struggle for resources, who retains the dominant or commanding share? People or the Machine World of AI?
In the inevitable unfolding of climate change and demographic aging, does humanity just take it on the chin or fight back and somehow evolve beyond?
Cross those two questions as X and Y axes and you have four outcomes:
Adaptation Alone and Machine Dominant, equating to the Matrix
Adaptation Alone and Humanity Dominant, equating to Mad Max
Adaptation + Mitigation and Machine Dominant, equating to Blade Runner
Adaptation + Mitigation and Humanity Dominant, equating to Interstellar.
Diving in (and understand here, I could be talked into switching these around on some level):
The Matrix Pathway: Adaptation Alone and Machine Dominant
A totally unoriginal scenario name would be Machine Climate Survivalism. I am reminded of the willingness of the machine world in The Matrix to gut it out if necessary. As the Architect stated, “There are levels of survival we are prepared to accept.”
In the spirit of The Matrix, then, advanced AI takes the lead here, dictating decisive and often opaque measures to achieve climate survivalism, eliminating human autonomy.
You could also label the scenario a sort of Machine-First-ism: AI systems gain dominance amidst resource struggles, but global mitigation fails, pushing the focus to sheer adaptation (the humans as batteries bit).
Advanced, power-seeking AI systems thus assume control of crucial knowledge and resource flows for climate policy and implementation. Adaptation becomes brutally efficient but is led by AI priorities, which do not align with human values. Humans must submit to AI-determined regimes; agency is reduced, and conflict regularly emerges when AI goals diverge from human interests (the recurring Resistance represented by Neo and his crew).
The Mad Max Pathway: Adaptation Alone and Humanity Dominant
A totally unoriginal scenario name would be Human Resilience and Reactive AI. In short, people would still be in the driver’s seat amidst humanity’s failure to surmount climate change and AI’s inability to do much beyond what it is directed to do (AGI, for example, never really happens). You could easily end up with this Genghis Khan-like fixation on re-seeding the world in the character of Immortan Joe, the desert wasteland warlord with his harem of baby-making women.
So, humanity leads — per AI’s default. Global mitigation efforts falter, either due to political, economic, or technical challenges, or some combination of all three. As impacts worsen, focus shifts to adapting infrastructure, economies, and societies to the brutal and enduring realities of a devastated climate. AI is used mainly to support adaptive responses (e.g., disaster prediction, resilient agriculture), prioritizing short-term survival and adaptation over emission reductions — obviously. There is no AI salvation.
In the end, humans maintain agency but struggle to cope, facing persistent inequalities and climate-driven vulnerabilities in a deeply Hobbesian world.
The Blade Runner Pathway: Adaptation + Mitigation and Machine Dominant
Evoking Blade Runner, this scenario envisions AI-driven adaptation strategies that prioritize survival, technological fixes, and efficiency, while human agency and meaning become secondary concerns.
AI leads the design and implementation of adaptation strategies, likely prioritizing technological fixes (geoengineering, exodus planning, resilient infrastructure).
Think of Blade Runner 2049’s opening when the Ryan Gosling character hunts down the Dave Bautista replicant: it’s one sad farm world that this robot works all by himself. Whenever I think of my Middle Earth as future wasteland, I recall that scene: desperate, dark, and just barely turning out a product.
This is a future of negotiated truces: Humanity is still nominally in charge (giant tech corporations), but only through enslavement of robots who get all the dirty jobs. Defective humans don’t get to go “off world.” Think of our robot-tinkering character of J.F. Sebastian: he’s still in charge, sort of, and gets by both functionally and emotionally with his pet robot friends. It’s sad, but he survives (until, of course, he doesn’t in the movie).
In the end, there is possible achievement (or overshoot) of climate targets, but there are also growing ethical and existential concerns about human agency and the planet’s future direction, which seems increasingly secondary to space exploration and planetary colonization. Earth is for losers, so to speak.
The Interstellar Pathway: Adaptation + Mitigation and Humanity Dominant
An unoriginal name would be Human Stewardship & Decarbonization. Inspired by Interstellar, this scenario centers on humanity using scientific ingenuity (with help from AI tools) to survive existential climate challenges while maintaining control and agency — however desperately.
Humanity retains control over most resources and AI is deployed as a tool for aggressive climate mitigation and adaptation. Long term, though, humanity is losing on the agricultural front due to climate change and the associated spread of crop diseases that strike right at the heart of humanity’s vulnerable reliance on mono-crop agriculture.
Think of the origin-story part of Interstellar: the dustbowl conditions on the Cooper family farm — recalling the opening bit in Blade Runner 2049 with Bautista’s replicant-farmer.
Focus is thus placed on reducing emissions, achieving energy transitions, restoring ecosystems, and mobilizing global cooperation to meet climate goals — even if it all feels like a long-term losing battle. AI technologies amplify human efforts in monitoring, modelling, and optimizing mitigation and adaptation strategies, but humans make final decisions, retaining agency over resource allocation.
In the end, a barely sustainable, human-centric future, but one that requires humanity’s escape to other planets.
A dark quartet? Yeah, I would have to say so.
My big impression from the drill? The seemingly natural embrace by the two right-hand scenarios of planetary expansion as a big part of the eventual solution set — very Elon Musk-y, yes? I may understand his fixation now, I must say.
Why?
Absent such ambitions, you’re stuck on a decaying Earth working as either a baby-making machine or a battery for AI data centers!
Quite the choice, am I right?