What is Technology?
Chapter 3 of Technological Metamodernism
Previously:
Reader: Alright, we’ve surveyed the skeptics and the optimists. Both camps seem to have strong opinions about technology and where it’s heading. But I’m realizing something: we’ve been arguing about technology for two chapters now, and I’m not entirely sure we’ve actually defined what we mean by it. Are we just talking about computers and AI? Or something broader?
Hilma: You’ve identified the crucial question—and it’s telling that we can have heated debates about technology’s future without first establishing what technology actually is. This definitional vagueness isn’t accidental. It allows each camp to strategically emphasize whichever aspects of technology support their position while ignoring the inconvenient complexities.
The word “technology” comes from the Greek technē (craft or art) and logos (word or reason)—literally, “the systematic treatment of an art or craft.” But in contemporary usage, “technology” has become a floating signifier that can mean almost anything we want it to mean. Sometimes it refers narrowly to digital devices and software. Other times it encompasses all human-made tools and systems. Sometimes it’s treated as a force external to society that “impacts” us. Other times it’s seen as an expression of human will and social relations.
This conceptual murkiness matters because how we define technology shapes what kinds of questions we can ask about it and what kinds of interventions seem possible.
Reader: So give me a proper definition then. What is technology?
Hilma: I’m going to resist the temptation to offer a single, tidy definition. Instead, let me introduce you to a framework that might help us think about technology more precisely: the civilizational tech-stack model developed by the Consilience Project.
This model identifies five overlapping layers at which technology operates in society: tools, technologies (in a narrower sense), ecologies of technologies, infrastructures, and technological epochs. Each layer builds on and interacts with the others, creating the complex socio-technical systems we inhabit.
Reader: A tech-stack? That sounds very... Silicon Valley. Are we about to get into software architecture?
Hilma: The metaphor is borrowed from software, yes, but it’s being used to describe something much broader and more fundamental. Think of it as a way to understand how simple human artifacts scale up into civilizational-level systems that shape everything from our daily routines to our sense of what’s possible.
Let’s start at the bottom: tools. These are human-scale artifacts—rocks, axes, forks, writing implements—that augment individual and social practices. Tools can be found or made, simple or complex, but they remain directly graspable and usable by individuals.
Reader: Okay, that seems straightforward. A hammer is a tool, a pen is a tool. What’s the next level?
Hilma: The next layer is technologies in a more specific sense—the application of complex, often scientific knowledge to problem-solving, embedded in artifacts that require engineering: waterwheels, steam engines, light bulbs, refrigerators. These are too complicated to be understood or made by most individuals without specialized knowledge.
Notice the shift here. A stone axe is a tool—any reasonably clever person can make and use one. A refrigerator is a technology—it requires systematic knowledge of thermodynamics, electrical engineering, and manufacturing processes that no individual possesses in totality.
Reader: So tools are simple and technologies are complicated?
Hilma: That’s one distinction, but there’s something more subtle happening. Tools tend to be relatively transparent in their operation and purpose. You can understand what a hammer does by looking at it and using it. Technologies often involve hidden complexity—mechanisms and principles that aren’t evident from observation alone.
But here’s where it gets really interesting: technologies don’t exist in isolation. They form ecologies of technologies—sets of technologies that are symbiotically related and co-evolving as nested functional units. A light bulb requires a lamp, power lines, transformers, and a power station. A computer with a microchip at its core needs a hard drive, screen, mouse, modem, broadband, and server banks.
Reader: So it’s like... you can’t really have a smartphone without cell towers, data centers, semiconductor fabs, and the entire global supply chain behind it?
Hilma: Exactly. And this is where the tech-optimist and tech-skeptic narratives both tend to falter. They often discuss technology as if innovations emerge in isolation—as if we could simply “add” self-driving cars or artificial intelligence to society and assess their impact independently. But technologies are always embedded in ecologies. You can’t understand what a smartphone “does” without understanding the entire ecology it’s part of—including the labor practices in cobalt mines, the behavioral psychology built into app design, the advertising-driven business model of digital platforms, and the geopolitical competition over semiconductor manufacturing.
Above ecologies, we find infrastructures—multiple different ecologies of technology embedded together to form a basic part of social coordination and material reproduction within a society. Supply chains, transportation systems, markets, communication systems—these are infrastructures.
Reader: So infrastructure is like the operating system that society runs on?
Hilma: That’s a useful analogy, actually. Infrastructure is often invisible until it breaks down—we don’t think about the electrical grid until the power goes out, or the water system until there’s a drought. But infrastructure fundamentally shapes what’s possible in a society. The presence or absence of certain infrastructures determines which economic activities are viable, which social arrangements are practical, which ways of life are sustainable.
And this brings us to the highest level: technological epochs—durations of historical time characterized by specific suites of infrastructures that interrelate as the foundation of a social system. Epochs are marked by discontinuous breaks from prior infrastructures and the emergent social dynamics resulting from new ones: pre-industrial, industrial, post-industrial.
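Since you teased me about Silicon Valley, here is the model rendered in the software idiom it borrows from: a deliberately toy sketch, in Python, of how each layer is composed of units from the layer below. The layer names are the Consilience Project’s; the class design and the examples are my own illustration, not their formalism.

```python
# A toy rendering of the civilizational tech-stack as nested dataclasses.
# Layer names follow the Consilience Project; everything else is illustrative.
from dataclasses import dataclass

@dataclass
class Tool:               # human-scale artifact, directly graspable and usable
    name: str

@dataclass
class Technology:         # engineered artifact embodying specialized knowledge
    name: str
    knowledge_required: list[str]

@dataclass
class Ecology:            # symbiotically related, co-evolving technologies
    name: str
    technologies: list[Technology]

@dataclass
class Infrastructure:     # ecologies embedded into social coordination
    name: str
    ecologies: list[Ecology]

@dataclass
class Epoch:              # a suite of infrastructures founding a social system
    name: str
    infrastructures: list[Infrastructure]

lighting = Ecology("electric lighting", [
    Technology("light bulb", ["materials science"]),
    Technology("power station", ["thermodynamics", "electrical engineering"]),
])
industrial = Epoch("industrial", [Infrastructure("the electrical grid", [lighting])])
```

Notice what the nesting makes vivid: you cannot instantiate an Epoch without Infrastructures, or an Infrastructure without Ecologies. The layers are not optional extras; each one presupposes the ones beneath it.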
Reader: Okay, this is helpful. But I’m still not sure I see why we need all these layers. Can’t we just say “technology is tools and machines”?
Hilma: We could, but we’d miss something crucial: technologies are created and evolve together to form whole civilizations. It’s important not to consider any technology as existing all by itself. When we say “the internet changed society,” we’re not just talking about a single technology. We’re talking about a shift across multiple levels simultaneously—new tools (smartphones, laptops), new technologies (fiber optic cables, server farms), new ecologies (social media platforms, cloud computing, streaming services), new infrastructures (global communication networks, digital payment systems), and potentially a new epoch.
The tech-stack model helps us see that debates about whether “technology is good or bad” are misconceived. A hammer—a tool—can be used to build or destroy. But infrastructures and epochs aren’t neutral in the same way. They create particular affordances and constraints, particular distributions of power and possibility. They make certain forms of life easier and others harder, often in ways that aren’t immediately visible.
Reader: You’re saying that technology isn’t just the tools we use, but something that shapes us even as we shape it?
Hilma: Precisely. And this insight actually has deep roots in 20th-century thinking about technology. Marshall McLuhan captured this with his famous phrase “the medium is the message”—meaning that the form of a technology matters more than its content. McLuhan and Quentin Fiore illustrated how media and technology don’t just convey information—they fundamentally reshape how we understand ourselves and our world, often in ways that have nothing to do with the ostensible purpose of the technology.
Reader: Can you give me a concrete example?
Hilma: Consider writing. The ostensible purpose of writing is to record and communicate information. But writing as a technology did something far more profound—it externalized memory, changed the structure of thought, enabled new forms of governance and commerce, and ultimately transformed human consciousness itself.
This is where we need to introduce another crucial concept: psychotechnology.
Reader: Psycho-technology? That sounds ominous.
Hilma: It shouldn’t. The term, as developed by cognitive scientist John Vervaeke, refers to mental, embodied, and pharmacological tools used to achieve insight, self-transcendence, and the cultivation of wisdom. Mental psychotechnologies include speech, literacy, numeracy, metaphor, meditation, and spiritual practices. Embodied psychotechnologies include fasting, yoga, martial arts, and breathing techniques. Pharmacological psychotechnologies are compounds that modify cognition, from caffeine to cannabis to psilocybin.
Reader: So psychotechnologies are technologies for the mind?
Hilma: More than that—they’re technologies that constitute the mind. This is a radical claim that deserves unpacking. We tend to think of our minds as given, natural things, and technology as external tools we use. But many of our most fundamental cognitive capacities are actually the result of psychotechnological innovation.
Take literacy. At the time of the Bronze Age collapse, around 1177 B.C., writing systems were clunky and literacy was rare. The learning curve was so steep that reading and writing were a full-time occupation for professional scribes. It wasn’t until the Phoenicians developed alphabetic writing—paring a repertoire of roughly 1,000 signs down to just 22 letters—that the learning curve dropped dramatically. Writing became learnable and useful, and literacy went viral. More readers meant more distributed cognition, expanding personal and cultural development.
Reader: But surely learning to read is just... learning to read? How is that a technology?
Hilma: Because literacy isn’t the discovery of a pre-existing natural capacity—it’s the installation of new cognitive machinery. At first you’re hyper-aware of the new skill, like when you first wear glasses. But before long, you see through literacy natively and turn your new vision toward greater ends. Psychotechnologies get internalized into the very fabric of cognition; they become the grammar through which we view the world.
Consider: there’s no part of the brain naturally dedicated to reading. Reading recruits and repurposes neural circuits that evolved for other purposes—visual processing, language comprehension, motor control. When we learn to read, we’re literally rewiring our brains. The literate brain is structurally different from the non-literate brain.
The key insight is that psychotechnologies don’t just help us think better—they change what kinds of thinking are possible. An illiterate person isn’t just “less educated” than a literate person—they inhabit a fundamentally different cognitive world. Their memory works differently, their sense of time works differently, their relationship to knowledge and authority works differently.
Reader: Okay, I think I’m getting it. But what does this have to do with our current technological moment? I mean, literacy is thousands of years old.
Hilma: Because literacy shows us the pattern. We’re in the midst of another massive psychotechnological transformation, but we’re largely blind to it. The same process that played out with literacy and numeracy is now playing out with digital media, algorithmic curation, and artificial intelligence.
Consider how smartphone and social media use is changing attention, memory, and social cognition. Preliminary research indicates that people who rely heavily on AI assistants for cognitive tasks display weakened brain connectivity patterns and reduced cognitive engagement when they can’t use the AI. This suggests we may be accumulating a form of “cognitive debt”—outsourcing mental labor to technology in ways that diminish our capacity to think deeply when technology isn’t available.
Reader: That does sound concerning. But isn’t that just the same worry people had about calculators making us bad at math, or GPS making us bad at navigation? Maybe it’s just... adaptation?
Hilma: That’s the crucial question, isn’t it? When is offloading cognitive work to technology a form of beneficial tool use, and when is it a form of cognitive atrophy? The answer isn’t simple, because psychotechnologies are always double-edged.
One of the problems is that we’ve developed incredibly powerful psychotechnologies for agency without developing corresponding psychotechnologies for wisdom. We have technologies that can capture attention, modify behavior, and shape belief at scale. What we lack are equally powerful technologies for cultivating discernment, developing ethical judgment, and maintaining meaningful connection.
This brings us to another crucial thinker: Ivan Illich, whose work on “tools for conviviality” offers a framework for distinguishing technologies that empower from technologies that dominate.
Reader: Tools for conviviality? That’s a lovely phrase. What does it mean?
Hilma: Illich distinguished between tools that remain under human control and those that come to control humans. Tools are convivial, he argued, when they can be easily used by anybody, as often or as seldom as desired, for accomplishing purposes chosen by the user. Convivial tools don’t require previous certification, don’t impose obligations, and allow users to express their own meaning in action. The telephone is Illich’s example—anybody can dial anybody, and no bureaucrat can define what people say to each other.
Convivial tools foster autonomous and creative intercourse among persons, and between persons and their environment. They enhance individual freedom realized in personal interdependence. Non-convivial or industrial tools, by contrast, tend towards monopoly—not just market monopoly, but what Illich called “radical monopoly,” where one type of product comes to dominate, excluding nonindustrial alternatives.
Reader: Can you give me an example of radical monopoly?
Hilma: Cars monopolizing traffic is Illich’s paradigmatic example. Cars shape cities into their image, ruling out walking or cycling in Los Angeles. They eliminate river traffic in Thailand. Once vehicular velocity exceeds bicycle speeds anywhere in a system, total per-capita time spent serving the travel industry increases. The higher the speed, the more time is consumed—by commuting, by paying for the system, by dealing with accidents. People spend more time moving and have less freedom of movement.
Reader: Wait, that seems counterintuitive. Faster travel takes more time?
Hilma: We have to account for the entire system, not just the movement itself. Time working to pay for the car and road system. Time stuck in traffic. Time dealing with accidents and maintenance. Time spent further apart because high-speed infrastructure spreads things out. When you add it all up, the average person in a car-dependent society can spend more time on transportation—not less—than someone in a bicycle-based society, despite moving at higher speeds.
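If you want the arithmetic made explicit, here is the back-of-the-envelope version. The 1,600 hours and 7,500 miles are the figures Illich himself used in Energy and Equity for the early-1970s American driver; the bicycle comparison is my own illustrative assumption, not his.

```python
# Illich's effective-speed arithmetic from Energy and Equity (1974).
# His figures: ~1,600 hours a year devoted to the car (driving, sitting
# in traffic, and earning the money for the car, fuel, insurance, and
# taxes) to cover ~7,500 miles.
car_miles = 7_500
car_hours = 1_600
print(f"car, all-in: {car_miles / car_hours:.1f} mph")    # ~4.7 mph: walking pace

# An assumed bicycle budget for comparison (my numbers, not Illich's):
# 2,000 miles a year at ~10 mph, plus ~50 hours of upkeep and earning.
bike_miles = 2_000
bike_hours = bike_miles / 10 + 50
print(f"bike, all-in: {bike_miles / bike_hours:.1f} mph")  # ~8.0 mph
```

On the speedometer the car wins easily; once the whole system’s time is counted, it loses.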
This is what Illich meant by radical monopoly—cars don’t just outcompete other transport modes in the market, they reshape space itself to make alternatives impossible. The same pattern plays out with schools monopolizing learning, hospitals monopolizing healing, and increasingly, digital platforms monopolizing social connection.
Reader: And Illich was writing about this in the 1970s, right? Before the internet, before smartphones, before social media?
Hilma: Yes, which makes his framework particularly interesting to revisit. Illich proposed that beyond certain thresholds, tools can turn against their ostensible purposes. Schools beyond a certain size and complexity might frustrate learning rather than enabling it. Medical systems beyond a certain level of sophistication might produce iatrogenic illness—harm caused by treatment itself. He suggested that major institutions pass through two watersheds: first, where they become genuinely useful, and second, where they risk becoming counterproductive.
Reader: Two watersheds? Can you explain that more clearly?
Hilma: Consider medicine as Illich analyzed it. The year 1913 marks what he called the first watershed: the point, by his estimate, at which a visit to a doctor first gave the patient a better-than-even chance of specifically effective treatment. For the first time, medicine’s effectiveness could be measured against scientific standards. But by the mid-1950s, Illich argued, medicine was approaching a second watershed. The system began creating new problems—dependency on professional intervention, medicalization of normal life experiences like pregnancy and aging, and iatrogenic effects where medical treatment itself became a significant source of illness.
The pattern he identified goes like this: institutions first become genuinely useful, solving real problems. Then they expand beyond their optimal effectiveness. Finally, some pass a point where their marginal effects may become negative—where growth creates as many problems as it solves, but the institution has become so embedded that alternatives seem unthinkable.
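Illich never put numbers to this, but the shape of the argument is easy to see in a toy model: let benefits saturate with institutional scale while systemic costs compound, and the two watersheds fall out on their own. The functions and constants below are purely illustrative, my sketch rather than anything Illich wrote.

```python
# A toy model of Illich's two watersheds (my illustration, not his math).
# Benefits saturate as an institution scales; systemic costs (overhead,
# dependency, iatrogenesis) start small but compound.
import math

def net_benefit(scale: float) -> float:
    benefit = 100 * (1 - math.exp(-scale / 20))   # diminishing returns
    cost = 20 + 0.5 * scale + 0.02 * scale ** 2   # setup cost plus compounding burden
    return benefit - cost

# First watershed: net benefit turns positive (the institution becomes
# genuinely useful). Second: growth turns counterproductive and net
# benefit sinks back through zero, yet scale keeps ratcheting up.
for scale in range(0, 81, 10):
    n = net_benefit(scale)
    print(f"scale {scale:3d}: net {n:6.1f} {'#' * max(0, int(n / 2))}")
```

Run it and you see the arc: negative at first, rising through the first watershed, peaking, then falling back through the second.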
Reader: That’s an ambitious critique. Are you saying all modern institutions have passed this point?
Hilma: I’m certainly not claiming that, and I don’t think Illich was quite that absolute either. What’s worth considering is whether some of our major institutions show signs of diminishing returns or counterproductive effects at their current scale. Schools that sometimes seem to impede curiosity. Healthcare systems that occasionally prioritize billing codes over healing. Transportation infrastructure that can reduce rather than enhance meaningful mobility. Communication technologies that paradoxically leave some people feeling more isolated.
This isn’t about malevolent actors—it’s a structural question. As tools become more powerful and complex, they tend to concentrate control in fewer hands, often for practical reasons. Operating a crane requires more training than using a wheelbarrow. Managing a hospital requires different skills than tending to a sick neighbour. As tools become more efficient at their core function, they often require more resources to maintain and more expertise to operate. Those who control these complex tools can claim greater privileges, justified by their specialized knowledge and presumed greater productivity.
Reader: So this is about inequality and power, not just about tools and technology?
Hilma: It’s about how tools create and enforce inequality. This is where Langdon Winner’s work becomes essential. Winner argued that in an age of high technology, choices about the kinds of technical systems we build and use are actually choices about who we want to be and what kind of world we want to create—technical decisions are political decisions, involving profound choices about power, liberty, order, and justice.
Reader: You’re saying technology isn’t politically neutral?
Hilma: Exactly. And more—technologies often have politics built into them. Winner’s famous (if contested) example is the bridges on Long Island parkways designed by Robert Moses, built deliberately low to prevent buses (and thus poor people and minorities who relied on public transit) from accessing Jones Beach. The bridges themselves encoded and enforced a particular social order.
A political lens on technology invites very different kinds of questions. Rather than simply “should we accelerate or pause,” we start to ask: “What scale of tools can remain under meaningful human control? What purposes should technology serve? Who gets to decide?” These aren’t technical questions with technical answers—they’re political and ethical questions that require collective deliberation and choice.
Reader: You keep hinting at a metamodern approach to technological design, but not quite spelling it out. Are we going to get to what that actually looks like?
Hilma: We’re almost there. We need just one more foundational piece: an understanding of metamodernism itself as a cultural sensibility and philosophical stance. That’s what we’ll explore in the next chapter.
For now, I want you to sit with this: technology isn’t politically neutral. It’s not just tools we use—it’s infrastructure that determines whose voice gets amplified, whose labour gets valued, whose ways of life remain viable. Every smartphone encodes a cascade of prior choices: about privacy, about attention, about what kinds of connection matter. Every algorithm is legislation that never went through parliament.
The question isn’t whether to have technology, but what kinds of technology we want, structured in what ways, serving what purposes, under whose control. That’s an inherently political and ethical question, not a technical one. And it’s that recognition—that technology is always already political, always already shaping human possibility—that opens the space for a genuinely metamodern approach to our technological future.


