The Techno-Optimists: The Longtermists
Chapter 2, part 2 of Technological Metamodernism
Previously:
Reader: Last time we talked about e/acc and their enthusiasm for acceleration. But you mentioned they positioned themselves explicitly against another movement—effective altruism. I’m guessing there’s more to that story?
Hilma: Much more. And to understand the full landscape of techno-optimism, we need to explore what I consider e/acc’s sibling ideology: Longtermism.
First, let me clarify the relationship between Longtermism and effective altruism, because it’s crucial to understanding what’s going on here.
Effective altruism—EA—began around 2009 as a movement dedicated to using reason and evidence to do the most good possible. Initially, it focused on measurable, near-term interventions: preventing malaria deaths, giving cash directly to people in extreme poverty, reducing animal suffering. These were concrete problems with trackable outcomes.
But over time, many of EA’s leading figures shifted focus toward what they called “Longtermism”—the idea that influencing the far future should be our primary moral priority. By the late 2010s, Longtermism had essentially captured EA’s intellectual leadership and funding priorities. While EA as a movement is broader than Longtermism, when we talk about the techno-optimist camp that e/acc defines itself against, we’re really talking about EA’s Longtermist wing.
Reader: So Longtermism is like a faction within effective altruism that took over?
Hilma: That’s one way to put it. The key figures are often the same people who founded or popularised EA—philosophers like William MacAskill and Toby Ord at Oxford. But their focus shifted from “how do we help people now” to “how do we ensure humanity achieves its cosmic potential over trillions of years.”
This shift matters because it’s where the techno-optimism enters. Longtermists emerged from the same intellectual milieu as e/acc—Oxford philosophers, Silicon Valley money, concerns about humanity’s future—but reached radically different conclusions about what we should do.
Where e/acc says “accelerate everything and let the future sort itself out,” Longtermism says “slow down on certain technologies because the stakes are cosmically high.” Yet paradoxically, as we’ll see, they have far more in common than either side typically admits.
Reader: But it sounds like there’s a kind of battle going on? A fight over whether to build the AI god faster or slower?
Hilma: Right. And it certainly played out dramatically in late 2023 when Sam Altman was briefly ousted from OpenAI by board members sympathetic to AI safety concerns, only to be reinstated days later after massive pressure from investors and employees. The tech press framed it as a battle between “accelerationists” and “doomers”—between those who wanted to charge ahead with AI development and those who wanted to pump the brakes.
I should clarify that “doomer” is a somewhat pejorative term for Longtermists and others concerned about AI extinction risk. The accelerationists popularised it to paint their opponents as pessimistic catastrophisers. Longtermists themselves prefer terms like “AI safety advocates” or simply describe themselves by their concern for existential risk. But “doomer” has stuck in popular discourse, partly because it’s punchier and partly because the accelerationists have been effective at framing the debate. It’s worth being aware that the language itself isn’t neutral—calling someone a “doomer” already stakes a claim about the reasonableness of their concerns.
What many miss is that the disagreement between these camps is remarkably narrow. Consider this revealing tweet from Emmett Shear, who briefly served as OpenAI’s interim CEO during the Altman drama:
“I’m a doomer and I’m basically e/acc on literally everything except the attempt to build a human level AGI”
Read that again. He’s a self-identified “doomer”—someone deeply concerned about AI extinction risk—and yet he agrees with e/acc about accelerating development of virtually every other technology. The split isn’t between fundamentally different worldviews about technology, progress, or humanity’s future. It’s a tactical disagreement about one specific technology: artificial general intelligence. On everything else—space colonisation, genetic engineering, nanotechnology, cognitive enhancement, spreading intelligence throughout the cosmos—they’re on the same page. As one critic aptly put it, this is a family dispute, not a fundamental clash of values.
Reader: A family dispute? That seems generous given how vitriolic the rhetoric gets.
Hilma: The most bitter conflicts often happen between those who share the most assumptions. Let me quote Émile Torres, who’s done important work mapping this ideological terrain:
“The doomers and accelerationists are just variants of [the same] movement. One is techno-cautious about certain advanced technologies like AGI, while the other is techno-optimistic. Both are part of the very same hyper-capitalist, techno-utopian tradition of thought that has roots in transhumanism and has become pervasive within Silicon Valley over the past 20 years. This is why the quarrels between these camps should be seen as mere family disputes.”
This isn’t a fundamental clash of worldviews. It’s a tactical disagreement within a shared worldview about how to handle a specific technology at a specific moment in time.
Reader: Can you unpack Longtermism a little more? You’ve mentioned it, but I don’t have a clear picture yet.
Hilma: Longtermism is the philosophical position that positively influencing the long-term future—we’re talking thousands, millions, even trillions of years from now—is the key moral priority of our time. Not a priority. The priority.
The argument goes something like this: There are potentially vast numbers of people who will exist in the future. If humanity survives and spreads throughout the universe, we could have 10^54 or more future people living flourishing lives. The moral weight of all those potential future lives completely dwarfs the moral weight of the 8 billion people alive today. Therefore, the most important thing we can do is ensure those future people get to exist—which means preventing human extinction at all costs.
Reader: That’s... quite a leap. I mean, I care about future generations, but are we really supposed to prioritise people who don’t exist yet over people suffering right now? How do they justify that?
Hilma: Through what philosophers call aggregative consequentialism—basically, adding up all the good and bad consequences of an action to determine if it’s right. If you believe that all lives count equally regardless of when they exist, and if you believe there will be vastly more future people than present people, then the math supposedly checks out: optimising for the long-term future produces the most total good.
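To make that arithmetic concrete, consider a stylised example. The numbers here are purely illustrative, not anyone’s official estimate. Suppose an intervention reduces the probability of human extinction by just one in a million. If 10^54 people could otherwise come to exist, the expected gain is:
10^54 potential future lives × 10^-6 reduction in extinction risk = 10^48 lives in expectation
That figure dwarfs anything we could do for the 8 billion people alive today, which is how even a minuscule shift in existential risk comes to outrank every present-day concern on this accounting.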
But here’s where it gets even more interesting. Remember how e/acc talks about spreading intelligence throughout the cosmos? Longtermists have essentially the same vision. MacAskill writes: “If future people will be sufficiently well-off, then a civilisation that is twice as long or twice as large is twice as good.” He explicitly makes “a moral case for space settlement” because “the future of civilisation could be literally astronomical in scale.”
Reader: Wait, so they both want to colonise space and create vast numbers of future beings? Then what are they even arguing about?
Hilma: Primarily about risk assessment and timing. Longtermists look at current AI development and see a potential extinction risk—something that could prevent all those trillions of future people from ever existing. They use the term “existential risk” to mean anything that could permanently curtail humanity’s potential, and they estimate the risk of AI causing human extinction this century at somewhere between 1 in 10 and 1 in 6, depending on which expert you ask.
e/acc proponents look at the same situation and estimate the risk as essentially zero. They think the “doomers” are catastrophising based on speculative scenarios, and that attempts to slow AI development are themselves the real risk—because they’ll hold humanity back from achieving that glorious cosmic future.
Reader: So both sides agree we need to get to the cosmic future, they just disagree about how likely it is that AI will kill us along the way?
Hilma: Essentially, yes. And it’s a framing that squeezes out space for more mundane but important questions about how AI should be integrated into society, who benefits from it, who’s harmed by it, and how to ensure it serves broadly shared human values. Questions about labour displacement and economic inequality. Questions about surveillance and privacy. Questions about algorithmic bias and discrimination. Questions about the concentration of power in a handful of tech companies. Questions about cultural homogenisation and the erasure of diverse ways of being.
These aren’t “cosmically important” questions in Longtermist terms, because they don’t affect whether 10^54 beings exist trillions of years from now. But they matter enormously to actual people living actual lives right now.
Reader: This all feels very abstract and philosophical. What does this actually mean for how AI development is happening right now?
Hilma: These philosophical positions translate directly into funding decisions, company policies, and lobbying priorities. OpenAI was founded partly by people inspired by Longtermist concerns about AI risk—they wanted to ensure that artificial general intelligence would be developed “safely.” That’s why they started as a nonprofit focused on safety research.
Then they created a for-profit subsidiary to raise capital for the enormous compute costs of training cutting-edge models. Then they took billions from Microsoft. Then they launched ChatGPT and triggered an AI arms race. Now they’re racing to build AGI faster than their competitors, with “safety” concerns taking a back seat to the imperative to ship products and maintain their lead.
Meanwhile, key figures have splintered off to found competing AI labs—Anthropic, founded by former OpenAI researchers who thought OpenAI wasn’t taking safety seriously enough. But Anthropic also needs to raise money, ship products, and compete. The same dynamics play out.
The people running these labs are generally thoughtful, well-intentioned individuals who genuinely worry about AI risk. But they’re embedded in competitive dynamics that push toward exactly the race-to-the-bottom scenario that Longtermists feared. The Longtermist funding and intellectual framework helped create the very AI labs whose competitive dynamics are now accelerating development in potentially dangerous ways.
Reader: That’s... darkly ironic. So the people most worried about AI risk helped create the conditions for the AI race they were worried about?
Hilma: Precisely. And this isn’t the first time. In the 2000s and early 2010s, many of the same people and organisations were raising awareness about AI risk, trying to convince the world that artificial general intelligence was both possible and important to work on. They funded conferences, published papers, created research institutes.
Their awareness-raising campaign worked—perhaps too well. They convinced technologists and investors that AGI was achievable and would be immensely powerful. This helped inspire the founding of DeepMind, which Google acquired for hundreds of millions. It helped inspire OpenAI. It created a talent pipeline funneling researchers into AI labs.
By making AGI seem real, achievable, and supremely important, they helped catalyse the very acceleration they now claim to oppose.
Reader: So what are they doing now? Surely they’ve learned from these mistakes?
Hilma: They’re doing essentially the same thing, but with governments instead of tech entrepreneurs. The new strategy is to convince policymakers that AI is too important and dangerous to leave unregulated—that governments need to step in and control AI development to prevent catastrophe.
But here’s the problem: if they succeed in convincing the US government that AI is existentially important and needs to be controlled, what do you think the government will do? Will it pause development? Unlikely. More probably, it will do what governments have done with every powerful technology in the past century—take control of it for military and strategic purposes.
Reader: Like the Manhattan Project?
Hilma: Exactly. When the US government became convinced that nuclear weapons were feasible and strategically critical, it didn’t slow or pause development. It accelerated it, poured enormous resources into it, and subordinated all other concerns to the goal of getting the bomb before the Nazis did.
If Longtermists succeed in convincing policymakers that AGI is real, achievable, and supremely important, they’re likely to trigger another race—only this time between nations rather than companies. The US might restrict private AI development while massively accelerating government-funded AI research for military applications. China and other nations would respond in kind.
This is the exact opposite of what Longtermists claim to want. But it’s the predictable outcome of their strategy, for exactly the same reasons as last time.
Reader: So are you saying that both Longtermism and e/acc are accelerating AI development, just through different mechanisms?
Hilma: That’s precisely what I’m saying. e/acc accelerates directly through ideological justification—telling technologists that building more powerful AI is not just acceptable but morally mandatory. Longtermism accelerates indirectly through advocacy that ends up creating the conditions for AI races, both commercial and governmental.
Both are embedded in what we might call “inevitabilist” thinking—the assumption that AGI is coming no matter what we do, so the only question is how to navigate that inevitability. This assumption becomes self-fulfilling. By treating AGI as inevitable and supremely important, they motivate exactly the concentrated effort that makes it more likely to happen.
Reader: So where does this leave us? If both the “accelerate” and “pause” camps are problematic, what’s the alternative?
Hilma: This is where we need to step back and examine their shared assumptions rather than getting caught up in their tactical disagreement. What if the entire framing is misguided? What if prioritising hypothetical far-future beings over present ones is a philosophical mistake? What if the cosmic vision of spreading intelligence throughout the universe is more ego-driven fantasy than meaningful goal?
What if, instead of asking “how do we ensure humanity achieves its cosmic potential,” we asked “how do we ensure technology serves human flourishing in all its messy, embodied, present-tense reality”?
Reader: That’s the metamodern approach you’ve been hinting at?
Hilma: It’s part of it. Technological metamodernism doesn’t just split the difference between techno-skepticism and techno-optimism. It questions the terms of the debate itself. It asks why we’ve allowed the conversation about technology to be dominated by extremes.
Before we articulate what a metamodern approach looks like in detail, we need to understand what technology actually is—not the breathless visions of either camp, but a more grounded, historically informed understanding of technology as a phenomenon. That’s what we’ll explore in the next chapter, as we develop the conceptual foundations for a genuinely metamodern approach to technology—one that can navigate between naive optimism and cynical pessimism, between reckless acceleration and fearful stagnation, between cosmic fantasies and paralysed presentism.
For now, I want you to sit with the realisation that the most visible debates about technology—between those who want to go faster and those who want to slow down—might be distracting us from more important questions entirely.