Previously: Introducing Hilma Church-Turing
Reader: So Hilma, this book is about "Technological Metamodernism." Before we get to the metamodern part, I'm guessing we need to understand the different perspectives on technology today. Let's start with the skeptics—though I have to say, in my circles, being skeptical about technology feels almost... retro? Like worrying about television in the 90s.
Hilma: There's certainly a temptation to dismiss tech skepticism as merely the latest iteration of cultural conservatism—the same anxiety that has greeted every technological revolution from the printing press to the smartphone. But what we're witnessing now runs deeper.
The Cambridge Dictionary added a term to capture this phenomenon: "techlash," defined as "a strong negative feeling among a group of people in reaction to modern technology and the behaviour of big technology companies." This isn't just scattered concern from digital immigrants struggling to adapt; it's a coordinated recognition that something fundamental has shifted in our relationship with technology and the companies that control it.
Reader: I've heard that term. But isn't this just the pendulum swinging back after all the techno-utopianism of the early internet era? Markets correct, public opinion shifts—it seems pretty normal.
Hilma: What's fascinating is how rapidly the pendulum swung. The Financial Times named "techlash" its word of the year in 2018, noting that "Silicon Valley was for years the Teflon industry—beloved by equity markets, politicians and the public alike. No more." In a remarkably short period, tech went from being seen as our salvation to our subjugation.
The Economist captured the emerging critique with the acronym "BAADD"—the belief that tech companies are too big, anti-competitive, addictive, and destructive to democracy. But beneath these specific complaints lies something more profound: a dawning recognition that these aren't just companies selling products but architects of our social reality itself.
Reader: I can see that. Facebook did go from "connecting the world" to being blamed for everything from teenage depression to election manipulation pretty quickly. But is this just about a few problematic platforms, or are you suggesting something more systemic?
Hilma: That's precisely where the most incisive critiques lead us. What if the problems we're witnessing aren't just bugs but features of a new economic system altogether? This is where I find Yanis Varoufakis's concept of "technofeudalism" particularly illuminating. He argues we've moved beyond capitalism into something structurally different—not merely capitalism with digital characteristics, but a fundamentally new arrangement of economic power.
Reader: Technofeudalism? That sounds... dramatic. I mean, I still get a paycheck, buy stuff on Amazon, invest in stocks—that's capitalism, right?
Hilma: Those activities exist, certainly, but the underlying dynamics have transformed. Let me explain how technofeudalism differs from capitalism in ways that might not be immediately visible but profoundly shape our lives.
Under industrial capitalism, the core dynamic was the extraction of surplus value from labour. A capitalist would hire workers, pay them less than the value they created, and pocket the difference as profit. The key relationship was between capital and labour.
But look at the dominant tech platforms today. How do they generate value? Not primarily by employing labourers who produce goods, but by extracting, processing, and monetising data generated by users who aren't formally employed by them at all. Facebook doesn't pay you for your posts, photos, and interactions, yet these become the raw material from which they generate billions in advertising revenue.
Reader: But I choose to use Facebook. Nobody's forcing me to post photos or scroll through my feed. How is this feudal? Isn't feudalism about serfs bound to the land, unable to leave?
Hilma: The parallel isn't about physical bondage but about economic structure and power relations. Medieval feudalism was characterised by lords who owned the land (the primary means of production) and serfs who worked it, surrendering a portion of what they produced to the lord in exchange for protection and the right to sustain themselves.
In technofeudalism, the digital platforms are the new "land"—the essential infrastructure for social and economic life. We surrender our data and attention to the tech lords not because we're physically forced to, but because practical participation in modern society increasingly requires it. Try applying for jobs without LinkedIn, coordinating with colleagues without Gmail, or maintaining friendships without social media. Technically possible, practically limiting.
What makes this feudal rather than capitalist is that the primary extraction isn't happening through the wage-labour relationship but through a kind of digital rent. We pay not with money but with the continuous surrender of our behavioural data, which is then used to modify our behaviour in ways that generate profit.
Reader: Okay, I can see some parallels, but feudalism seems like such an archaic reference. What specifically makes our current arrangement "feudal" rather than just "digital capitalism"?
Hilma: Three key features distinguish technofeudalism from capitalism:
First, the means of production have fundamentally changed. In capitalism, factories, machines, and raw materials were the crucial productive assets. In technofeudalism, the critical infrastructure is the digital platform itself—the algorithmic architecture that captures, processes, and monetises our data. This infrastructure isn't just a tool; it's a terrain that shapes what kinds of economic activities are possible and how they unfold.
Second, value extraction has shifted from exploitation of labour to exploitation of behaviour. Traditional capitalism said: "Work for me, and I'll pay you less than the value you create." Technofeudalism says: "Live your life on my platform, and I'll monetise your behaviour patterns." The extraction is no longer contained within working hours but extends to our entire existence.
Third, and perhaps most importantly, we're seeing the privatisation of market-making itself. Amazon isn't just a company operating within a market; it is the market. It determines which products are visible, which sellers succeed, and how prices are set. Facebook doesn't just sell advertising; it creates the entire ecosystem in which digital attention is allocated. These platforms don't just play the game; they own the playing field, write the rules, referee the match, and take a cut of every transaction.
Reader: That's a compelling framework, but I'm still not convinced it's a whole new system. Couldn't this just be the latest evolution of capitalism? After all, capitalism has continually morphed throughout history.
Hilma: You're raising an important point about continuity versus rupture. Certainly, there are elements of continuity with capitalism—private ownership still dominates, profit remains a primary motive, markets still exist (albeit increasingly captured within private platforms).
But I think Varoufakis is right that we've crossed a threshold where quantitative changes have led to a qualitative transformation. Consider how power operates in this new system. Under capitalism, wealth was translated into power through ownership of physical capital, which then required labour to activate. There was an interdependency—capitalists needed workers, creating a site of contestation and negotiation.
In technofeudalism, the tech lords possess a different kind of power—algorithmic governance. They can modify behaviours, shape preferences, and steer social interactions at scale without requiring much human labour at all. The ratio of capital to labour in these companies is staggering—WhatsApp had just 55 employees when Facebook acquired it for $19 billion.
This shift changes the nature of social struggle. Labour unions made sense when workers could withdraw their labour to pressure capitalists. But what leverage do users have against platforms? Delete your account, and you're just erasing yourself from the social world while the platform continues unaffected.
Reader: I'm still struggling with the technofeudalism concept. If big tech companies are the new feudal lords, who exactly are the serfs in this analogy?
Hilma: You're right to probe the analogy. No historical parallel is perfect, and there are important differences between medieval feudalism and our emerging technofeudal order.
In medieval feudalism, the primary economic relationship was indeed between lords and serfs, with a relatively small merchant class engaging in market activities. In technofeudalism, we see a more complex stratification.
At the top are the tech lords—platform owners who control the digital infrastructure and extract rent from all activity within their domains. Below them are various tiers of what we might call "digital vassals"—businesses completely dependent on platforms for their viability. Think of Amazon sellers, YouTube creators, or Uber drivers. They aren't employees, but neither are they truly independent. They operate at the pleasure of the platforms, surrendering a significant portion of the value they create in exchange for access to the digital terrain.
Then there's the broad mass of users—digital subjects who produce data through their everyday activities. We aren't bound to the land like serfs, but we are increasingly bound to these platforms by necessity. As more of social and economic life moves online, opting out becomes less viable.
There's also a growing class of actual labourers—content moderators, warehouse workers, delivery drivers—who perform the physically demanding work that keeps the digital economy functioning. These workers often face precarious conditions despite their essential role.
Reader: That makes sense. So it's not a perfect one-to-one mapping with medieval feudalism, but a rhyming pattern of centralised power and distributed dependency. But these companies still need us, right? If everyone left Facebook, it would collapse. We have some power.
Hilma: In theory, yes. But in practice, the coordination problem is immense. These platforms benefit from network effects—the more people use them, the more valuable they become to each user. This creates a powerful lock-in that makes collective exodus nearly impossible without some external organising structure.
More importantly, they've become infrastructural—woven into the fabric of how society operates. When Facebook goes down for a few hours, small businesses lose sales, people lose contact with loved ones, and information stops flowing. We've built our lives around these platforms not because they're perfect, but because they've become as essential as electricity or roads.
And that brings us to another feudal parallel: the merging of economic and political power. Medieval lords didn't just own land; they governed the people who lived on it. Today's tech giants don't just own platforms; they govern vast digital territories with their own rules, enforcement mechanisms, and dispute resolution systems. Facebook's Oversight Board has been called a "Supreme Court" for a reason—it's effectively a private governance system for a population larger than any nation-state.
Reader: Hmm, I can see why this is concerning. But don't governments still have the ultimate authority? They could regulate these companies if they wanted to.
Hilma: In principle, yes. But technofeudalism complicates this in several ways.
First, these platforms operate globally while regulation remains primarily national. A company can threaten to withdraw services or investment if a particular country's regulations become too onerous.
Second, there's the issue of capture—both intellectual and material. Regulators often lack the technical expertise to effectively govern these systems, relying on the very companies they're regulating for information and guidance. And the revolving door between industry and government creates material incentives for regulatory sympathy.
Third, and most insidiously, is what I'd call infrastructural leverage. When a system becomes essential enough, threatening it becomes unthinkable. We've seen this with banks deemed "too big to fail." Tech platforms are approaching a similar status—too embedded to regulate meaningfully without disrupting basic social functions.
Reader: This all sounds rather hopeless. If we're truly in a technofeudal system as you describe, what can be done? Are we just... stuck?
Hilma: The recognition of technofeudalism isn't a counsel of despair but a necessary recalibration of our understanding. Medieval feudalism lasted centuries but eventually gave way to new forms of economic organisation. Systems that appear eternal in their moment prove transient in historical perspective.
What's crucial is that we accurately diagnose our condition. If we keep applying capitalist remedies—like antitrust law designed for industrial monopolies—to technofeudal problems, we'll miss the mark. We need new conceptual frameworks and new forms of collective action suited to this emerging reality.
But before we explore alternatives, let's examine some other strands of tech skepticism that complement the technofeudal critique.
Reader: Like what? What other major critiques are out there beyond this feudalism angle?
Hilma: Shoshana Zuboff's analysis of "surveillance capitalism" overlaps significantly with technofeudalism but emphasises the behavioural modification aspect. She argues that Google and Facebook pioneered a new economic logic where human experience is claimed as free raw material for translation into behavioural data, which is then used to build prediction products that anticipate what you will do now, soon, and later.
The power of these systems lies not just in knowing what we've done but in predicting and shaping what we will do. As she puts it in "The Age of Surveillance Capitalism," we face "a new form of power marked by extreme concentrations of knowledge and free from democratic oversight."
Then there's the critique of technological solutionism, articulated by Evgeny Morozov in "To Save Everything, Click Here." He argues that the tech industry has a dangerous tendency to frame complex social problems as neatly defined issues that can be solved through computing. This "solutionist" mindset reduces human complexity to optimisation problems, leading to technological interventions that might fix a narrow issue while creating new problems or eroding important social values.
Reader: Wait, "solutionism"? Isn't solving problems a good thing?
Hilma: The issue isn't with solving problems per se, but with how problems are defined and solutions implemented. Take content moderation on social media. The "problem" can be framed as "harmful content exists online," with the "solution" being algorithmic filtering. But this technical framing ignores deeper questions: Who decides what's harmful? What values guide these decisions? What are the effects of outsourcing moral judgment to automated systems?
Solutionism often depoliticises inherently political questions, presenting technical fixes for what are fundamentally issues of values, power, and social organisation. It's a particularly seductive ideology because it promises clean, efficient solutions without the messiness of democratic deliberation.
Another important strand comes from critics like Douglas Rushkoff, who in "Throwing Rocks at the Google Bus" argues that we've programmed our economy for growth at all costs, treating technology as a tool for extracting value rather than creating it. He suggests that digital technology could have ushered in a new age of distributed prosperity, but instead it's been used to "put industrial capitalism on steroids."
On democracy, Jamie Bartlett's "The People Vs Tech" offers a stark assessment of how digital technology is undermining democratic institutions. He argues that the internet and big tech companies are eroding the middle class, weakening sovereign authority, and degrading civil society—all essential foundations of democracy.
These concerns are echoed by insiders who've become critics. Roger McNamee, an early investor in Facebook and mentor to Mark Zuckerberg, chronicles his journey from tech champion to critic in "Zucked: Waking Up to the Facebook Catastrophe." His account is particularly powerful because it comes from someone who once believed deeply in Facebook's mission and potential.
McNamee describes his growing alarm as he recognised how the platform was being manipulated by malicious actors and how Facebook's leadership seemed unwilling or unable to address these problems. His insider perspective reveals how the company's business model—based on engagement and growth at all costs—created vulnerabilities that threatened democratic processes and social cohesion.
Reader: So far we've talked about big structural critiques—technofeudalism, surveillance capitalism, solutionism, democracy. But what about the more immediate concerns? The effects on our minds, our attention spans, our social relationships?
Hilma: Those immediate experiential impacts are indeed crucial, and several important critics focus there. Nicholas Carr's "The Shallows" examines how internet use is rewiring our brains, training us to process information in shallow, scattered ways that make sustained attention and deep thought increasingly difficult.
He argues that every information technology carries an intellectual ethic—a set of assumptions about knowledge and intelligence. While the printed book encouraged deep concentration and linear thinking, the internet promotes rapid sampling of small bits of information from many sources. We're becoming excellent at scanning and skimming but losing our capacity for concentration and contemplation. Johann Hari extends these ideas in "Stolen Focus," arguing that our ability to pay attention is being systematically undermined by technologies designed to capture and monetise our attention.
Concerns about cognitive impacts are now being validated by emerging neuroscience research. A 2025 study titled "Your Brain on ChatGPT" examined the neural consequences of using AI assistants for writing tasks. Using EEG measurements, the researchers found that people who relied on LLMs consistently showed weaker brain connectivity patterns than those who wrote without technological assistance. When regular LLM users were asked to complete tasks without AI help, they displayed reduced cognitive engagement, suggesting a form of "cognitive debt" that accumulates with AI dependence. This research provides empirical support for what critics like Carr have been arguing—that outsourcing cognitive labour to technology may be fundamentally altering our neural capabilities, diminishing our ability to engage in deep thinking when the technology isn't available.
Jaron Lanier, one of virtual reality's pioneers, has become one of tech's most thoughtful critics. In "Ten Arguments for Deleting Your Social Media Accounts Right Now," he argues that social media's business model is fundamentally incompatible with human flourishing. The imperative to maximise engagement leads to algorithms that amplify the most emotionally triggering content, effectively programming us to be angry, outraged, and polarised.
Similarly, Sherry Turkle's "Alone Together" explores how technology reshapes our relationships. She observes that digital connections promise intimacy without the demands of friendship—we can engage when convenient and withdraw when not. The result is a paradoxical combination of constant connection and persistent loneliness.
Reader: This is all quite bleak. Do any of these critics see positive paths forward, or are they just documenting our descent into a digital dystopia?
Hilma: Most offer some vision of alternative possibilities. Rushkoff outlines practical steps for businesses, consumers, and policymakers to "reprogram" the economic operating system from the inside out. Lanier argues for business models where users pay for services directly rather than with their data and attention. Zuboff calls for new legal frameworks that protect human experience from commercial surveillance.
Others take more radical stances. Dougald Hine's "At Work in the Ruins" and Vanessa Machado de Oliveira's "Hospicing Modernity" acknowledge the likelihood of systemic collapse but seek ways to create meaning and connection within that recognition.
Reader: Some fascinating if unsettling metaphors there. But where does Technological Metamodernism fit into all this? Is it just another critique, or something else?
Hilma: Technological Metamodernism isn't merely another critique to add to the pile, nor is it a simple rejection of these critiques in favour of techno-optimism. It's an attempt to transcend the binary thinking that pits technological progress against human values.
The techno-skeptical perspective provides crucial insights into the problems we face. The techno-optimistic view, which we'll explore next, offers important perspectives on technology's potential. Technological Metamodernism seeks not to choose between these positions but to oscillate between them, holding their tensions in a productive synthesis.
But before we get there, we need to understand the opposing pole—the techno-optimistic perspective that sees technology as our path to salvation rather than subjugation. That's where we'll turn next.
Reader: Good. After all this doom and gloom, I'm curious to hear the counterarguments. Because despite all these concerns, I still use my smartphone, still order from Amazon, still post on social media. There must be something these technologies are giving us that we value, despite the costs.
Hilma: Absolutely. And that paradox—the simultaneous recognition of both the liberating and subjugating aspects of technology—is precisely where metamodernism begins. It's not about resolving the contradiction but inhabiting it consciously. The key insight is that neither unbridled techno-optimism nor dogmatic techno-skepticism alone can guide us through our digital future. We need a more nuanced position that acknowledges the validity of both perspectives while moving beyond their limitations.
But first, let's hear what the optimists have to say.
The techlash against Amazon, Facebook and Google—and what they can do to head it off, The Economist
A brutal year: how the 'techlash' caught up with Facebook, Google and Amazon, The Guardian
The People Vs Tech: How the Internet Is Killing Democracy by Jamie Bartlett
Zucked: Waking Up to the Facebook Catastrophe by Roger McNamee
Throwing Rocks at the Google Bus: How Growth Became the Enemy of Prosperity by Douglas Rushkoff
The Shallows: What the Internet Is Doing to Our Brains by Nicholas Carr
Stolen Focus: Why You Can't Pay Attention—and How to Think Deeply Again by Johann Hari
Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle
Ten Arguments for Deleting Your Social Media Accounts Right Now by Jaron Lanier
Breaking Together: A Freedom-Loving Response to Collapse by Jem Bendell
To Save Everything, Click Here: The Folly of Technological Solutionism by Evgeny Morozov
New Dark Age: Technology and the End of the Future by James Bridle