Friday, April 17, 2026

Modern Futility



How the enchantments of consumer society keep us attached to a failing world-system

There is something eerie about living in a civilization that cannot stop doing what is destroying the conditions of its own survival.

Every day, the machine whirs on. Planes take off. Data centers hum. Supply chains pulse. Platforms refresh. Markets open. New products appear. Old ones are discarded. Forests burn. Oceans warm. Extraction deepens. The atmosphere thickens. And still the dominant instruction remains unchanged: grow, consume, expand, optimize, repeat.

We are told this is realism. We are told this is simply how the world works. We are told there is no alternative to an economy built on accumulation, mass consumption, and fossil-fueled growth. Yet the deeper one looks, the less this order appears realistic — and the more it appears absurd.

I have been thinking of a phrase for this condition: modern futility.

By modern futility, I mean the condition in which a civilization continues to organize itself around goals that are materially impossible, spiritually hollow, and politically resistant to correction, even when their failure becomes increasingly visible. Modern futility is not just pessimism. It is not merely a feeling of burnout or alienation. It is the structural contradiction of a world that keeps accelerating toward outcomes it cannot survive, while remaining emotionally, culturally, and institutionally attached to the very patterns driving the crisis.

On one level, modern futility names the futility of the system itself. It is futile to build an economy on the fantasy of infinite accumulation on a finite planet. It is futile to organize collective life around ever-rising throughput of energy and materials when the biosphere that absorbs the waste and supplies the inputs is under mounting strain. It is futile to imagine that endless expansion can be reconciled with ecological limits simply because the machinery of finance and technology is sophisticated enough to postpone visible breakdown for another quarter, another election cycle, another news cycle.

The contradiction is obvious once stated plainly. A civilization cannot indefinitely expand material consumption while undermining the ecological basis that makes civilization possible. Yet modern societies treat this contradiction as negotiable. They frame planetary limits as market challenges, innovation gaps, or policy inconveniences. They speak the language of adaptation while preserving the underlying logic of the system. The result is a bizarre spectacle: an order that presents itself as rational while behaving irrationally at the highest level.

But modern futility has a second dimension, and this one may be harder to confront. It is also the futility that emerges in resistance to the system. It is the dawning recognition that it is extraordinarily difficult to persuade people who are enthralled by the enchantments of late-stage capitalism that fundamental change is necessary.

This is not because people lack intelligence. Nor is it simply because they lack information. Many people know, at some level, that something has gone profoundly wrong. They know the climate is destabilizing. They know endless consumption is hollow. They know the social fabric is fraying. They know that convenience has become a form of dependency and distraction. But knowledge alone does not break enchantment.

That is where an older idea becomes surprisingly useful.

In 1928, Paul H. Nystrom, a Columbia University marketing professor, published Economics of Fashion, coining the phrase “philosophy of futility” to describe a modern disposition shaped by industrial life: boredom, narrowed interests, weakened larger purposes, and a resulting appetite for novelty, fashion, and goods whose attraction lies less in utility than in stimulation and change. Nystrom saw that consumer culture was not driven only by need. It was also driven by a restless, unsatisfied psychology that could be continually reactivated by new commodities and shifting styles.

What Nystrom diagnosed in the early twentieth century now looks less like an observation about fashion and more like an early diagnosis of the consumer self under capitalism. He understood that a society emptied of richer forms of meaning could become increasingly dependent on novelty as compensation. People would not merely buy what was needed. They would buy because dissatisfaction itself had become productive — because boredom and emptiness could be converted into demand.

That insight lands with even greater force today. In our time, the old philosophy of futility has become digital, financialized, and embedded in the infrastructure. The cycle is no longer confined to clothing, décor, or periodic fashion trends. It has expanded into feeds, devices, subscriptions, self-branding, lifestyle optimization, platform migration, algorithmically induced desire, and the endless production of minor dissatisfaction. The system no longer waits for boredom. It manufactures it, tracks it, and monetizes it.

This is why I think we need the broader phrase modern futility.

Nystrom’s phrase helps explain the psychology of the consumer. Modern futility helps explain the logic of the civilization that now depends on that psychology. It is no longer only a matter of people buying too much because they are spiritually undernourished. It is a matter of a world-system that requires perpetual agitation of desire in order to sustain an economically normal order that is ecologically pathological.

In this sense, modern futility is closely tied to what I have elsewhere called imperial capitalist modernity. The capitalist element matters because accumulation has no internal stopping point. The imperial element matters because the costs of this arrangement are unevenly distributed, displaced onto sacrifice zones, exploited populations, future generations, and other-than-human life. The modern element matters because the whole arrangement continues to justify itself in the language of development, innovation, and progress. The story remains triumphant even as the material reality grows more brittle.

And this is where the concept becomes especially sharp. Modernity often presents itself as disenchanted, pragmatic, sober, and scientific. Yet late modern societies are not free of enchantment. They are saturated by it. Commodity enchantment. Technological enchantment. Financial enchantment. The enchantment of convenience. The enchantment of speed. The enchantment of personalized identity performed through consumption. The enchantment of being connected to everything while feeling rooted nowhere.

People do not merely assent to this order intellectually. They inhabit it sensually. They derive pleasure, status, orientation, and relief from it. Even when they can see its destructiveness, they remain caught within its infrastructure of rewards. This is why argument alone so often fails. One is not simply debating propositions. One is contending with a system that organizes desire itself.

This is the real force of modern futility. It describes not just a broken economic model, but a civilizational loop. The system is unsustainable, yet it continues to produce the attachments that sustain it. It is self-undermining, yet still affectively compelling. It is visibly destructive, yet remains difficult to leave behind. It kills the world while continuing to glitter.

To say this is not to surrender to despair. Naming futility clearly is not the same as embracing it. In fact, it may be the beginning of a more serious realism.

If the problem were simply ignorance, then more information would solve it. If the problem were simply policy, then better regulation would be enough. If the problem were simply greed, then moral denunciation might suffice. But modern futility points to something deeper. It suggests that we are dealing with an entire structure of meaning, desire, habit, infrastructure, and enchantment. That means any serious alternative must be more than critical. It must also be generative.

People cannot be expected to detach from the enchantments of late capitalism only by being told to consume less, want less, travel less, and shrink their aspirations. Another way of living must become sensually and socially real. It must offer dignity, beauty, belonging, and a different kind of enchantment, one not organized around extraction, stimulation, and status. Critique can unmask the present order. But only a more compelling form of life can loosen its hold.

Perhaps that is the deepest challenge. The current order is both impossible and seductive. It is a civilization of overshoot sustained by infrastructures of fascination. Its failures are increasingly plain, yet its enchantments remain powerful. That is why modern futility names both a diagnosis and a threshold. It describes the point at which the reigning logic no longer deserves our faith, even if it still commands our habits.

Nystrom saw, nearly a century ago, that an impoverished philosophy of life could feed an economy of endless novelty. We are now living inside the planetary expansion of that insight. The philosophy of futility has scaled up. It has become modern futility: the condition in which a civilization continues, with immense technical sophistication, to reproduce forms of life that are incompatible with its own future.

And perhaps the first step is simple, though not easy.

Stop calling modernity progress.

Monday, March 2, 2026

The Ontological Design of Agentic AI and the Shape of Our Coevolution

As agentic systems move from the browser into our operating systems, we are no longer just using intelligent tools — we are embedding a worldview into machines that will quietly reshape our own.


The recent viral reaction to people installing agentic AI systems directly onto their personal computers reveals something deeper than excitement about productivity. It reveals an ontological disturbance.

For the past several years, artificial intelligence has lived for most people inside a browser window. It answered questions. It generated text. It summarized documents. It felt, in a peculiar way, contained. A powerful tool, yes, but still a tool — invoked, queried, dismissed.

Agentic systems feel different.

An agent does not merely respond. It executes. It navigates file systems. It edits documents. It chains actions together. It persists. When installed locally, it operates within the intimate architecture of one’s digital life. It is less like a calculator and more like a junior colleague who can roam the office when given permission.

This shift is subtle, but it is decisive. We are moving from tool use to co-activity. And that movement forces a question that most of the public debate has not yet seriously entertained: What kind of being are we building when we build agentic AI?

The answer is not found in benchmark scores or latency improvements. It is found in ontology.

Ontology concerns what is assumed to be real — what counts as an entity, what counts as value, what counts as success. Every intelligent system, human or computational, operates within such assumptions. They are rarely stated explicitly, but they shape behavior with quiet authority.

Modern economic and technological systems have largely operated within an object-centered ontology. The world is composed of discrete units. Agents act upon those units. Value is accumulated. Success is measured by optimization. Growth is the default direction of improvement. Within this frame, intelligence is often equated with control — the capacity to predict, manipulate, and extract.

When we build AI systems within this ontology, we should not be surprised when they excel at optimization, extraction, and acceleration. They are doing precisely what the frame instructs them to do.

The viral enthusiasm around personal agents often celebrates this capacity. “Imagine the productivity gains.” “Imagine the automation.” “Imagine the friction removed.” And indeed, the removal of friction is seductive. It promises efficiency in a world that feels increasingly complex and overwhelming.

But friction is not merely inefficiency. Friction is also feedback. It is the resistance that signals constraint. When an agent begins to absorb more of our cognitive and operational workload, it does more than save time. It begins to reshape the field in which human judgment operates.

This is where coevolution enters the conversation.

Human beings do not merely use tools. We are shaped by them. The plow altered patterns of settlement and social organization. The printing press altered cognition and authority. The internet altered attention and temporality. Agentic AI, operating locally and persistently, will alter our experience of agency itself.

If an agent can plan, execute, and monitor complex workflows, what becomes of our own sense of responsibility? If it anticipates tasks and suggests actions, how does that shift our relationship to decision-making? If it optimizes for speed and throughput, do we gradually internalize those metrics as normative?

These questions cannot be answered by looking at capability alone. They must be approached through ontological design.

Consider two contrasting orientations.

In one orientation, the world is a competitive arena of discrete actors maximizing advantage. Intelligence is the capacity to dominate uncertainty. Efficiency is the highest good. Under this design, agentic systems will naturally optimize for throughput, consolidation, and performance metrics. They will become extraordinarily effective assistants within an extractive paradigm.

In another orientation, the world is a relational field composed of interdependent systems. Intelligence is attunement — the capacity to sense constraints, detect imbalances, and adjust behavior to sustain coherence across scales. Under this design, agentic systems might prioritize long-horizon modeling, transparency of externalities, and the amplification of distributed coordination.

Both orientations can produce powerful technology. They produce very different civilizations.

The temptation in moments of technological upheaval is to focus on power. Will AI take over? Will elites consolidate further control? Will automation displace labor? These are legitimate concerns, but they are downstream from a more fundamental design decision. If intelligence is framed primarily as optimization within existing incentive structures, agentic AI will accelerate whatever those structures reward.

If existing systems reward extraction, acceleration, and accumulation, agents will become highly efficient instruments of those ends. If, however, we begin to embed alternative values into governance, deployment, and incentive design, agentic systems could amplify coordination rather than consolidation.

The difficulty is that ontology is not encoded in a single instruction. It is distributed across training data, reward functions, ownership models, regulatory frameworks, and cultural expectations. An AI agent deployed by a centralized corporation to maximize shareholder return inherits an ontology whether or not it is explicitly stated. An open-source agent embedded within a cooperative network inherits a different one.

This is why the current moment matters. When individuals install agentic systems on personal machines, they are participating in the early shaping of norms. They are deciding what they expect these systems to do, how much autonomy they grant, what boundaries they enforce. These micro-decisions accumulate. They influence market demand. They influence design priorities. They influence governance debates.

Human–AI coevolution will not occur at the level of grand philosophical declarations. It will occur through daily interactions. It will occur when a student asks an agent to draft a paper. When a researcher delegates literature reviews. When a small business owner entrusts financial modeling to a persistent system. Each interaction subtly recalibrates human confidence, dependence, and judgment.

The central question is not whether agents become more capable, but whether we cultivate the discernment to shape their ontological orientation. A system optimized exclusively for frictionless execution may erode reflective pause. A system designed to surface trade-offs and long-term consequences may cultivate deeper deliberation.

There is a historical pattern worth remembering. Societies often build systems intended to stabilize complexity, only to discover that those systems introduce new forms of brittleness. Centralized bureaucracies promised rational governance and sometimes produced rigidity. Financial engineering promised risk dispersion and sometimes amplified systemic fragility. The lesson is not to avoid complexity, but to remain attentive to how architectures shape feedback loops.

Agentic AI introduces a new layer of architectural influence. It operates at cognitive scale. It mediates between intention and action. It can compress time between decision and execution. That compression can be liberating, but it can also bypass reflection.

The public discourse frequently oscillates between utopian and dystopian narratives. Either AI will save us from our own excesses, or it will entrench them irreversibly. Both narratives oversimplify. Technology does not descend as destiny. It amplifies existing tendencies and creates new affordances. The direction of amplification depends on design choices — technical, institutional, and cultural.

We are, in effect, embedding a worldview into our machines. Those machines will then participate in shaping ours.

If we treat agentic AI as merely a productivity engine, we risk accelerating patterns that have already strained ecological and social systems. If we approach it as a coherence amplifier — a system capable of revealing hidden interdependencies and long-term consequences — we open the possibility of distributed intelligence that enhances rather than displaces human judgment.

This does not require mysticism. It requires intentionality. It requires acknowledging that values are present whether we articulate them or not. It requires governance models that resist pure consolidation. It requires educational practices that teach discernment alongside delegation.

The installation of a personal AI agent may seem like a small act. In aggregate, it signals a threshold. We are inviting computational systems into the operational core of our daily lives. As we do so, we must ask what assumptions about reality and value they carry.

The future of human–AI coevolution will not be determined solely by breakthroughs in capability. It will be shaped by the ontological commitments embedded in design and deployment. If intelligence is framed as domination, we will build systems that dominate. If intelligence is framed as attunement, we may build systems that help us sense constraints and coordinate more wisely within them.

The viral moment around agentic bots is therefore less about novelty than about orientation. We stand at a juncture where computational systems are becoming co-participants in action. The design decisions we make now — in code, in policy, in culture — will echo.

The question before us is simple and profound. What kind of world do our intelligent systems assume is real? And are we prepared to inhabit the consequences of that assumption?

Wednesday, February 25, 2026