Monday, December 29, 2025

Agency Without Control

Rethinking the Self in an Age of Distributed Intelligence


Most people I speak with today share a quiet, recurring discomfort. It appears when they work with artificial intelligence, when they collaborate inside fast-moving teams, when they try to make sense of ecological crises that refuse simple solutions. The feeling is not panic. It is not fear. It is something subtler: the sense that one ought to be in control—and isn’t.

We reach for familiar strategies. We try to improve our prompts, sharpen our skills, optimize our workflows. We assume that with enough mastery, the system will once again behave. And when it doesn’t, the failure feels personal, as if we are falling short of a role we are supposed to play.

But what if the discomfort isn’t a skill issue at all?

What if it is a metaphoric mismatch?

Modern life trained us to experience ourselves as autonomous individuals acting upon a world of tools, resources, and problems. We learned to locate agency inside the self and to treat the surrounding environment as something to be managed, controlled, or overcome. For a long time, this image worked. It aligned with relatively stable institutions, slow feedback loops, and technologies that extended human effort without fundamentally reshaping human cognition.

Today, that alignment is breaking down.

Artificial intelligence does not behave like a tool in the traditional sense. Ecological systems do not respond to command and control. Collective intelligence does not move in straight lines. Yet we continue to approach these domains as if the self remains a sovereign actor standing outside the system, issuing instructions from a position of oversight.

The resulting friction is often interpreted as anxiety about technology or uncertainty about the future. I think it runs deeper than that. I think it arises because the metaphoric structure through which we experience agency—who we believe ourselves to be in relation to the world—no longer fits the environments we inhabit.

Before we ask how to use AI well, or how to coordinate action in complex systems, we may need to ask a more fundamental question: what kind of self do these environments require?

That question does not point toward better techniques or stronger willpower. It points toward a quieter, more unsettling shift: a change in how we imagine the self.

The modern conception of the self did not arise by accident. It emerged alongside a particular world—one shaped by industrial production, scientific rationalism, bureaucratic institutions, and technologies that amplified human effort without dissolving human boundaries. In that world, the individual made sense as a discrete unit of agency: a thinking subject who possessed skills, made decisions, and acted upon an external environment.

This self was imagined as bounded. Cognition happened inside the head. Responsibility resided inside the person. Tools were inert extensions, subordinate to human intention. The world, though complex, was assumed to be ultimately legible and governable through analysis, planning, and control.

Within those conditions, autonomy was not an illusion—it was an achievement.

The modern self learned to specialize, to master domains, to optimize performance. It learned to separate means from ends, facts from values, subject from object. It cultivated a posture of distance: stepping back from the world in order to understand it and understanding it in order to act effectively upon it.

This posture worked remarkably well. It powered scientific discovery, technological innovation, and unprecedented material abundance. It supported stable careers, professional identities, and coherent life narratives. Cause and effect were slow enough to track. Systems were bounded enough to manage. Expertise could accumulate without immediately destabilizing the environment that produced it.

Crucially, the modern self did not experience itself as lonely or alienated by default. On the contrary, it experienced competence. To act autonomously was to be effective. To be effective was to matter.

The problem, then, is not that the modern self was misguided. The problem is that it was ecologically tuned to a world that no longer exists.

As feedback loops accelerated, as cognition began to spill into networks and machines, as agency became distributed across systems no single actor could fully oversee, the assumptions that once grounded autonomy quietly eroded. Yet the image of the modern self remained intact. We continued to expect command where only coordination was possible. We continued to seek control where responsiveness was required.

What once felt like strength began to feel like strain.

The modern self, trained to stand apart and act upon the world, increasingly finds itself embedded within processes it cannot step outside of—systems that respond, adapt, and evolve faster than individual intention can track. And because the self has not yet been reimagined, this mismatch is often experienced as personal inadequacy rather than ontological lag.

We try harder. We optimize further. We double down on mastery. But the ground beneath the metaphor has already shifted.

As the limits of the modern self become harder to ignore, a new metaphor has begun to circulate—especially in creative, intellectual, and AI-mediated work. It is the metaphor of the conductor.

In this image, the individual is no longer the sole producer of outcomes. The conductor does not generate sound. The musicians do. The intelligence of the system lies not in execution but in coordination—in timing, pacing, emphasis, and attunement to the whole. Authority becomes lighter. Mastery becomes relational rather than possessive.

It is an appealing metaphor, and for good reason.

The conductor loosens the grip of heroic individualism without abandoning agency altogether. It acknowledges distributed contribution while preserving coherence and meaning. It reassures us that there is still a role for human judgment, taste, and responsibility—even as the complexity of the system increases.

In many contexts, this metaphor is a genuine improvement. It reflects how people increasingly experience creative collaboration, including work with AI: less as issuing commands to a tool, more as shaping conditions under which something coherent can emerge. The conductor listens as much as they lead. They respond as much as they direct.

And yet, for all its sophistication, the conductor metaphor quietly preserves a familiar architecture.

The conductor still stands outside the orchestra.
They retain a privileged vantage point.
They oversee a bounded system governed by a score, a tempo, and a shared frame of reference.

Coherence, in this image, is still something that can be imposed from above—if not forcefully, then skillfully.

This is where the metaphor begins to strain.

The environments we now inhabit—ecological, technological, cognitive—do not resemble orchestras. There is no fixed score. No stable tempo. No clear boundary between performers and instruments. Feedback loops are fast, recursive, and often opaque. Agency is distributed not only across people, but across machines, infrastructures, and environments that respond in ways no single participant fully controls or understands.

In such conditions, there is no place to stand outside the system.

This is the point at which a deeper shift becomes necessary—not just in how we coordinate action, but in how we conceive of the self itself.

The ecological or 4E conception of self—embodied, embedded, enactive, extended—offers a different starting point. Rather than imagining the self as an autonomous agent or even as a coordinating authority, it understands the self as a participant in ongoing processes of sense-making that unfold across bodies, tools, environments, and social fields.

From this perspective, cognition does not reside solely in the head. It arises through interaction. Agency is not something the self possesses and deploys; it is something that emerges through engagement with a landscape of affordances. Action is not primarily about issuing decisions, but about responding skillfully to changing conditions.

The self, in this frame, is less a conductor and more a node—a site of sensitivity within a distributed network. What distinguishes one node from another is not authority or control, but attunement: the capacity to register shifts in the field and to adjust in ways that allow coherence to propagate.

This is a more difficult metaphor for modern minds to inhabit. It offers no overview, no command position, no guarantee of narrative centrality. And yet it more accurately reflects how intelligence already operates in complex systems—biological, ecological, and increasingly technological.

Seen this way, the task is no longer to coordinate the system from above, but to learn how to participate well within it. Not to impose order, but to sense emerging patterns. Not to control outcomes, but to move in phase with forces that exceed any single point of view.

What feels like a loss of agency from the standpoint of the modern self begins to look like a different kind of agency altogether—one grounded not in mastery, but in relationship.

If the ecological self is not a conductor, a natural question follows: how does coordination happen at all? If no one stands outside the system, if agency is distributed and situational, what accounts for moments of alignment, direction, or shared movement?

One way to answer this is through the notion of affordance attractors.

An affordance attractor is not a rule, a command, or a plan. It is a pattern in the landscape of possibilities that makes certain actions more likely, more stable, or more resonant than others. Rather than telling agents what to do, it reshapes what can be done with relative ease. It tilts the field.

Affordance attractors operate quietly. They do not announce themselves. They are sensed rather than interpreted. When people find themselves moving together without having agreed on a strategy, when conversations suddenly flow, when collaboration “clicks,” it is often because participants have entered the same affordance basin. Action becomes coordinated not through control, but through shared responsiveness to the same gradient.

From this perspective, coherence does not need to be imposed. It emerges when multiple nodes become sensitive to the same attractor and adjust accordingly. No one leads. No one follows. Movement happens because the terrain itself has changed.
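The dynamic can be made concrete with a toy simulation (everything here is illustrative: the double-well potential, the parameters, and the names are my own, not part of any formal model). Independent agents that each follow the local gradient of the same landscape settle into the same basins, with no communication and no leader:

```python
import random

def gradient(x):
    # Derivative of the double-well potential V(x) = (x**2 - 1)**2,
    # whose two basins act as attractors at x = -1 and x = +1.
    return 4 * x * (x**2 - 1)

def settle(agents, steps=2000, lr=0.01):
    # Each agent independently steps downhill along the local gradient.
    # No agent communicates with, or even knows about, any other.
    for _ in range(steps):
        agents = [a - lr * gradient(a) for a in agents]
    return agents

random.seed(0)
agents = [random.uniform(-2, 2) for _ in range(10)]
settled = settle(agents)
# Every agent ends up near -1 or +1: coordinated movement emerges
# from shared responsiveness to the same terrain, not from control.
```

Run it with any seed: the agents always cluster near −1 or +1. What coordinates them is the shape of the terrain, not any exchange of instructions.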

This helps explain why the ecological self does not experience agency as choice alone. Agency feels more like navigation: the ability to register subtle shifts in the environment and to move in ways that remain viable as conditions evolve. Skill lies not in prediction, but in attunement. Intelligence lies not in command, but in timing.

Seen this way, the growing discomfort many people feel in complex systems takes on a different meaning. It is not evidence of inadequacy or loss of control. It is a signal that an older metaphor of selfhood is being stretched beyond its ecological fit.

The conductor metaphor marks an important transition away from heroic individualism. But it still imagines coherence as something overseen. The ecological self lets go of oversight altogether. It accepts that there is no external vantage point from which the whole can be grasped. What remains is participation—partial, situated, responsive.

Living as a node in a distributed network does not mean disappearing into the system. It means understanding influence as relational rather than sovereign, and responsibility as attentiveness rather than command. It means acting in ways that deepen coherence where possible and reduce harm where alignment fails.

This is not a call to abandon agency, but to reimagine it. Not as control over outcomes, but as the capacity to sense affordances and move with them skillfully.

In a world shaped by accelerating feedback loops, ecological instability, and increasingly non-human forms of intelligence, this shift is no longer optional. The question is not whether the modern self will be replaced, but whether we can learn—gradually, imperfectly—to inhabit a different one.

Not the conductor of the orchestra.

But a participant in the music.


Friday, December 26, 2025

Navigating the Affordance Landscape

Creativity, Selfhood, and Agency in the Age of Extended AI


We are living through a period of change that is not merely technological but topological. The ground beneath our habits, identities, and expectations is shifting—not once, but continuously. Tools no longer arrive as discrete instruments to be mastered and set aside; they arrive as living systems that reshape the conditions of action themselves. In this context, many of our inherited metaphors—career ladders, skill acquisition, tool mastery, productivity—begin to fail us. They assume a stable terrain. We no longer inhabit one.

A more fitting metaphor for this moment is that of an affordance landscape: a dynamic field of possibilities shaped by the interaction between agents, environments, and technologies. What matters in such a landscape is not control, nor even expertise in the traditional sense, but attunement—the capacity to perceive emerging possibilities and move with them.

Nowhere is this more apparent than in the experience of working with extended AI systems.

From Tools to Terrain

In the analog and early digital worlds, creativity was inseparable from friction. Progress required time, repetition, apprenticeship, and the slow accumulation of procedural knowledge. Mastery conferred authority precisely because it was difficult to obtain. Effort functioned as both a gatekeeper and a moral signal: if something took a long time to learn, it deserved respect.

AI-mediated systems disrupt this logic at a foundational level.

When an image can be improved, a design refined, or a complex workflow executed in minutes—often with results that exceed prior efforts—the relationship between effort and outcome is severed. This is deeply unsettling for those whose sense of self and value is anchored in procedural mastery. But it is also revelatory. It exposes something that was always true but easy to ignore: much of what we called “skill” was not essence, but interface negotiation.

The shift from tools to terrain matters. Tools are things we use. Terrain is something we move within. AI no longer behaves like a passive instrument; it reshapes the space of possible actions. The relevant question is no longer “How do I master this tool?” but “What does this landscape now make possible for someone like me?”

That question is inherently relational.

The End of the Autonomous Self (Quietly)

Modernity trained us to imagine the self as autonomous, bounded, and self-sufficient. Intelligence was presumed to reside inside the individual, with tools acting as external amplifiers. This model worked—up to a point. But it came with hidden costs: exhaustion, identity rigidity, and the constant pressure to keep up as complexity increased.

Extended AI systems expose the limits of this model.

When intelligence becomes distributed across humans, machines, datasets, and infrastructures, agency is no longer localized. It is orchestrated. Creativity becomes less about execution and more about orientation, judgment, and sense-making. The self shifts from operator to navigator.

This is not a loss of agency. It is a reconfiguration of it.

Those who cling to the autonomous self model often experience AI as threatening or dehumanizing. But for those already experimenting with relational or distributed models of selfhood, AI feels less like replacement and more like resonance. It does not diminish authorship; it relocates it. The human contribution moves upstream—from manipulating pixels and menus to shaping intention, meaning, and coherence.

What becomes scarce is no longer skill, but discernment.

Friction, Time, and Meaning

One of the most profound effects of AI-mediated creativity is the collapse of friction at the operational layer. Tasks that once required hours now take minutes. For some, this feels like a violation of an unspoken ethical contract: meaning was supposed to be earned through effort.

But effort is not meaning. It is merely one historical path to it.

When friction is removed, time does not disappear; it is redistributed. Depth does not vanish; it migrates. The question becomes where that liberated time and energy are reinvested. If speed is used only to produce more, faster, exhaustion returns under a different name. But if speed creates space for reflection, experimentation, and conceptual play, something else becomes possible.

In this sense, AI does not trivialize creativity—it raises the bar. When execution is cheap, coherence matters more. When iteration is instant, direction matters more. When outcomes arrive quickly, the capacity to recognize what is alive, aligned, and worth pursuing becomes decisive.

The affordance landscape rewards those who can sense gradients rather than defend positions.

Winners, Losers, and Misalignment

It is true—and unavoidable—that periods of rapid landscape change produce uneven outcomes. Some people will experience loss: of status, of identity, of hard-won expertise. This is not because they lack talent, but because their talents were cultivated under a different regime of constraints.

Framed through the affordance landscape metaphor, this is not a moral failure but a mismatch. Landscapes do not reward virtue; they reward fit. Anxiety, resentment, and resistance often signal a gap between how one has learned to move and how the terrain now behaves.

Conversely, those who thrive are not necessarily the most technically adept. They are those willing to relinquish procedural sovereignty in exchange for expanded reach. They can tolerate surprise. They can collaborate with systems whose inner workings they do not fully control. They understand that authorship today is less about command and more about curation, steering, and resonance.

In short, they are adaptable selves rather than defended ones.

Aging, Experience, and a Quiet Advantage

There is an irony here worth noting. Those who grew up in analog worlds—who remember the slowness, the labor, the materiality of creation—often feel the rupture most acutely. But that very contrast can become an advantage. Having lived through multiple regimes of friction, they can recognize what has genuinely changed and what has not.

They know that judgment, taste, and meaning were never located in the tools themselves.

For such individuals, AI’s acceleration is not disorienting but exhilarating. It feels like time returned rather than stolen. Energy once spent wrestling interfaces can now be invested in thinking, composing, and world-building. The fascination is not with the machine, but with the newly expanded space of possibility for creative life—especially later in life, when energy is precious and curiosity remains abundant.

This is not nostalgia. It is perspective.

Toward New Metaphors of Agency

The affordance landscape metaphor does important cultural work because it avoids false binaries. It does not ask us to choose between human and machine, mastery and surrender, speed and depth. Instead, it invites us to think in terms of navigation, attunement, and relational agency.

It reminds us that:

  • intelligence is not a possession but a field
  • creativity is not an act but a process of alignment
  • agency is not control but participation

Most importantly, it gives us a way to stay oriented without pretending the ground will stop moving.

In an era where change outpaces adaptation, metaphors matter. They shape what we notice, what we fear, and what we believe is possible. The affordance landscape does not promise stability. It promises legibility. And in a world of extended intelligence, legibility may be the most valuable affordance of all.

The question before us, then, is not whether AI will change the landscape—it already has. The question is whether we will cling to old maps, or learn to sense new contours.

Some will defend the hills they know. Others will begin to explore.

And a few—quietly, experimentally—will start making worlds in the middle of the shift.

Thursday, December 18, 2025

Superpositioned Worlds


How Secular Urban Moderns Can Re-Enter the Pluriverse Through the Metaphors of Science

For decades, scholars of decoloniality and pluriversality have argued that the modern world is not the only world. Other ontologies—ancestral, indigenous, relational, animist—continue to exist alongside the dominant worldview of late-stage capitalism. These world-spaces hold alternative metaphors for living, and they carry different understandings of what it means to be human, to be in relation, to belong to a landscape, or to inhabit time. Pluriversality, at its core, is the recognition that many worlds coexist and that no single metaphoric regime has a monopoly on reality.

But acknowledging this plurality and inhabiting it are two different things. For secular urban moderns—those formed by scientific rationalism, procedural cognition, and the architectural logics of capitalism—the metaphors of the global South rarely land in a way that transforms lived experience. They are appreciated aesthetically, admired ethically, even embraced politically, yet they remain externally located. They do not migrate into the internal architecture where meaning is formed.

This is not a failure of the metaphors themselves. It is a mismatch of modes of sense-making. Modernity has produced individuals whose perceptual receptors are calibrated to scientific explanations, empirical claims, and material structures. Their imaginations have been shaped by physics, computation, networks, and systems models. A metaphor like “the mountain is a person” may resonate emotionally, but it will not reorganize how a secular modern perceives the world. It cannot install itself into their operating worldview because it relies on symbolic grammars they no longer speak.

And yet, if secular moderns are to escape the reduction of late capitalism—which quietly casts them as functional subalterns in a world optimized for extraction and productivity—they, too, must find a way to enter the pluriverse. They need metaphors that destabilize the notion of a single, dominant reality and open a passage back into multiplicity. They need new interpretive tools that permit them to inhabit more than one world at a time, without requiring them to adopt a religious or ancestral cosmology they cannot metabolize.

This is where contemporary science becomes unexpectedly generative.

Quantum mechanics, complexity theory, systems biology, information theory, and topology already describe a reality that is profoundly pluriversal. Their concepts destabilize modernity’s most cherished assumptions. They offer metaphors that secular moderns trust because they emerge from domains that have shaped their cognitive development. And they reveal a universe in which multiple realities coexist, in which relations are ontologically primary, and in which observers are entangled with the worlds they help bring into being.

Superposition is the first metaphor that invites secular moderns into pluriversality. It describes a world where multiple states coexist, layered on top of one another, waiting for interaction to collapse one possibility into a particular expression. As a metaphor, superposition tells us that many realities exist simultaneously—cultural, perceptual, existential—and that our lived world is not singular but selected through participation. It gives modern individuals permission to sense themselves as inhabiting overlapping modes of being, none of which require exclusive allegiance.
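What superposition literally describes can be shown in a few lines (a deliberately minimal sketch of the Born rule; the amplitudes and labels are illustrative, not drawn from the essay): two possibilities coexist in a single description, and each interaction selects one of them with a probability fixed by its amplitude:

```python
import random

# A two-state superposition: amplitudes for possibilities "A" and "B".
# The squared amplitudes give the outcome probabilities and must
# sum to 1: 0.6**2 + 0.8**2 == 1.0.
amp_a, amp_b = 0.6, 0.8

def measure():
    # Interaction "collapses" the superposition: exactly one
    # possibility is expressed, weighted by its amplitude squared.
    return "A" if random.random() < amp_a ** 2 else "B"

random.seed(1)
outcomes = [measure() for _ in range(10_000)]
frac_a = outcomes.count("A") / len(outcomes)
# frac_a comes out near 0.36: both possibilities coexist in the
# description, but each interaction expresses just one of them.
```

Until a measurement happens, neither "A" nor "B" is the state; the state is the weighted coexistence of both, which is exactly the quality the metaphor borrows.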

Entanglement reveals that relation is not secondary. It is constitutive. Identities, selves, and meanings arise not from isolated individuals but from networks of mutual influence and resonance. Entanglement becomes a secular metaphor for relational ontology, one that requires no spiritual scaffolding yet still conveys the profound interdependence found in indigenous philosophies.

Topology provides a language for describing the shape of experience itself—how worlds are organized, how identities stabilize in attractors, how social and psychological forms bend, fold, or rupture. Topological Awareness Mode (TAM), understood as a secular practice, makes it possible to feel the structure of one’s world and to recognize that different metaphoric regimes produce different experiential landscapes. TAM gives individuals a way to move between those landscapes with skill and discernment.

Resonance offers a path to coherence: the sense that a particular metaphor, practice, narrative, or way of being vibrates in harmony with one’s internal field. It relocates meaning from beliefs to pattern alignment. It allows secular moderns to sense the “rightness” of an experience without requiring them to adopt any metaphysical explanation. In doing so, resonance becomes a bridge between worlds.

What emerges from these scientific metaphors is not a rejection of the metaphors of the South but a complementary pathway. Instead of facing a binary—either adopt Indigenous metaphors or remain locked in modernity’s single ontology—secular moderns gain access to a third option: a way to re-enter the pluriverse through metaphors that match their epistemic temperament.

This matters because metaphors do more than describe reality; they shape it. They tune our perception, structure our agency, and define the range of worlds we believe we can inhabit. When the inherited metaphors of modernity begin to crack—exhausted by precarity, ecological collapse, and the psychic costs of extraction—new metaphors must arise to guide us into the next world.

If pluriversality is the project of expanding the multiplicity of worlds we can inhabit, then the metaphors of science can serve as the secular modern’s entry point. They do not replace the metaphors of the South, nor do they diminish them. Instead, they widen the field of possibility. They help create a pluriverse that is capacious enough to hold many ways of being, including those whose imaginations were shaped not by ancestor stories but by physics labs, mathematics classrooms, and the invisible architectures of the information age.

In this sense, adopting scientific metaphors is not an escape from modernity but a way of completing its arc—transcending the narrow, one-world worldview it inherited from industrial capitalism and stepping into a reality where many worlds coexist, each with its own coherence, its own resonance, and its own pathways of meaning.

Superposition, in this context, is more than a metaphor. It is the cognitive gateway through which secular moderns can rediscover plurality, sense multiple realities, and reclaim the freedom to build lives that do not collapse into a single predetermined world. It is how they begin to re-enter the pluriverse—not as tourists, not as imitators, but as world-builders in their own right.

Thursday, December 4, 2025

How We Lost the Plot: What Happens When a Society Loses Its Shared Story — and Its Sense of the Real


We are living through a transformation so deep it cannot be captured by economics alone—one that is reshaping not just our livelihoods, but our very sense of what is real.

For years, we have been told that the middle class is shrinking because of technology, globalization, or a temporary mismatch between skills and opportunity. But I’ve come to believe this explanation is too small for the scale of the transformation underway. Something deeper—structural, historical, and ontological—is happening beneath the surface.

The rupture began, I think, with the fall of the Berlin Wall.

For all its brutality and distortion, the Cold War locked elites and masses into a shared geopolitical project. The “people”—industrial workers, clerical labor, students, soldiers—were not merely economic units. They were strategic resources held in reserve. Welfare systems, pensions, public education, accessible healthcare—these were more than social goods; they were stabilizing investments in a mobilizable population.

With the collapse of the Soviet Union, that incentive evaporated. The need for mass participation waned. Wars no longer required millions of bodies. Global markets fractured the old national economies. Capital gained the ability to pool talent, labor, and consumption anywhere, anytime.

And then, almost imperceptibly, the bottom 90 percent began to be reclassified.

The shift wasn’t announced. There was no proclamation that citizenship had thinned into a legal fiction. But the effect was unmistakable: the majority were transformed from rights-bearing citizens into something closer to functional subalterns—their value measured less by participation in a social contract than by their capacity for extraction. What the colonized experienced under imperial capitalism—precarity, disposability, a structural inability to speak their world into legitimacy—is now resurfacing inside the very nations that once exported it.

Globalization, financialization, and the reach of information and communication technologies didn't simply disrupt industries; they dissolved the underlying logic of the post-war settlement. The middle class was not just an income bracket. It was a historical artifact, sustained by an implicit agreement: your stability for our stability.

When that agreement lost its geopolitical utility, it began to fray. Today, millions find themselves in a condition that would be familiar to the forgotten and ignored peoples of colonial history: living in survival mode, unable to secure housing, healthcare, education, or even a coherent narrative about their place in the world. The core features of the colonial condition—disposability, marginalization, ontological erasure—have quietly migrated into the heart of advanced democracies.

This is where the ontological dimension becomes unavoidable.

When a population is structurally downgraded, the first thing that collapses is not material well-being, but epistemic standing. People lose the right to define what is real. Their experiences are pathologized; their struggles reframed as personal failure; their intuitions dismissed as irrational. They are spoken about but not with. They occupy the position Gayatri Spivak famously articulated: the subaltern cannot speak—not because they are silent, but because the dominant structure cannot hear them.

The newly precarious majority now inhabits that same position. They are feeling the early symptoms of ontological displacement: mistrust in institutions, attraction to unconventional imaginaries, and the search for alternative ways of making sense of a world that no longer reflects them back to themselves.

People are losing the ability to locate themselves within the story of their own society. They feel the decoherence before they can name it. And without a shared ontology, the old narratives collapse.

But this structural demotion couldn’t succeed on economic grounds alone. It required a second, equally powerful process: the ontological occupation of public reality.

This is where the modern nation-state reveals its updated function. In earlier eras, the state told a story of shared destiny and upward mobility. Today, its narrative machinery operates differently. Rather than generating cohesion, it maintains ontological containment. The purpose is no longer to unify the public around a common project but to limit the bandwidth through which alternative realities can be articulated, circulated, and taken seriously. The state does not need to own the media. It only needs to shape the frame within which media operates.

This is an organizational lock on the imagination.

Through narrative saturation, regulatory pressure, and alignment with capital, this framing keeps the public sphere narrow, individualized, and emotionally charged but politically impotent. The effect is subtle but decisive. People do not simply lose access to material stability; they lose the legitimacy of their own worldview. Their ability to describe the world in terms that make sense to them is delegitimized before it can become politically actionable.

The economic hardship of this domestic precariat is thus compounded by ontological displacement: a sense that the world is no longer coherently narratable from their point of view. In that epistemic void, the old social contract dissolves.

Under these conditions, alternative ontologies—mutualism, decoloniality, ecological relationality, local sovereignty, new forms of consciousness—are perceived as threats, not because they are dangerous in themselves, but because they expose the narrowness of the dominant frame. They reveal that the ontological perimeter around “reality” is politically maintained. They show that the terrain of possibility is larger than the story we are being told.

Yet this ontological occupation is not as stable as it appears.

People sense the fracture. Precarity sharpens perception. Climate destabilization amplifies ontological dissonance. AI enables individuals to engage in accelerated autodidactic exploration, bypassing traditional gatekeepers. The cracks are widening. New world-making efforts are emerging from the margins—small coherence clusters in a landscape that is otherwise fragmenting.

Perhaps that is the deeper task now—not to restore the old social contract, which belonged to a geopolitical era that no longer exists, but to cultivate alternative ontologies capable of grounding life in a rapidly shifting world. To listen with the newly subalternized majority, just as anthropologists once listened with the colonized, and recognize that their struggle is not only material but ontological.

We are living through a reordering of the real. Naming it is the first step. Reimagining it is the work ahead.

 

 

Monday, December 1, 2025

The Quiet Gifts of AI

 


                                    Why the most meaningful benefits are the hardest to notice.

Across the public conversation about AI, fear dominates the emotional landscape. People imagine disruption, displacement, and instability—roles dissolving, workflows collapsing, identities becoming unmoored. These fears are not unreasonable; they reflect genuine decoherence events, moments when the structures that once held our lives together lose stability before new ones have fully formed.

Yet this is only half of the story.

What rarely receives attention are the subtle coherence gains—those quiet, cumulative expansions of clarity, flow, creativity, and agency that become possible when AI is used not to replace human effort but to deepen it. When engaged as a collaborator rather than a threat, AI becomes a coherence technology, a force that restores cognitive harmony in a world increasingly engineered toward distraction and fragmentation.

I have experienced this directly in both my teaching and my multi-media storytelling. The contrast between my pre-AI and post-AI life is not measured in productivity metrics or efficiency curves; it is felt at the ontological level, in the way my days hold together, the way my work aligns with my values, and the way I inhabit my creative identity. This is what the public conversation overlooks: the quiet gifts—the coherence gains—that accumulate when AI is woven thoughtfully into the architecture of one’s life.

The essential question, then, is not whether AI will eliminate jobs. The deeper question is whether AI will help us reorganize our lives toward greater coherence, or whether fear will keep us bound to patterns that are already failing us.

 

The Real Problem Isn’t Job Loss — It’s Decoherence

The anxiety surrounding AI often collapses into a single storyline: the fear that one’s profession may disappear. But beneath that surface-level concern lies something more pervasive—the sense that life itself is losing its structural integrity. Rapid technological change can produce a felt experience of fragmentation, overwhelm, disorientation, and cognitive overload. It is not simply that tasks change; it is that the inner scaffolding that once made those tasks feel meaningful begins to tremble.

What people miss is that AI can also reverse these dynamics. Used well, it can restore alignment at multiple scales—moment-to-moment clarity, long-term flow, narrative cohesion, and relational harmony. To see how this plays out, consider how AI reshaped my teaching practice.

 

Teaching Through the Lens of Coherence

Long before AI entered the picture, I had already gravitated toward the lexical approach to ESL—a pedagogy built on authentic materials, chunking, collocations, noticing, and pragmatics. But the lexical approach demands an immense amount of material. Each lesson requires naturalistic dialogues, contextualized idioms, controlled practice, slow-versus-natural speech contrasts, and tasks that mirror real-life communicative pressure.

Doing this manually demanded patience and enormous amounts of time. A single high-quality lesson could take hours to construct, which meant I spent close to ten hours each week in preparation, often compromising on depth simply because time was finite.

AI changed this dynamic entirely.

Instead of wrestling with scarcity, I could now generate original dialogues, adapt authentic media, design tasks tailored to a specific student, and build lessons that captured the texture of real-world English with remarkable precision. The surprising revelation was not merely the time saved, but the qualitative leap in pedagogy. My teaching became more responsive, more imaginative, and more coherent. And because I was no longer drained by the mechanics of preparation, the classroom shifted from a site of production to a space of relational presence.

This is the unrecognized value of AI in education: it reduces cognitive friction and returns the human teacher to the heart of the learning encounter.

 

AI as an Autodidactic Amplifier

But the quietest gift of AI, at least for me, has unfolded outside the classroom. AI did not simply refine my teaching; it amplified my learning. As a lifelong autodidact, I have always depended on books, archives, and the slow accumulation of insight over decades. What AI offers is not a shortcut but a deepening—a way of accelerating understanding while preserving (and often enhancing) the richness of inquiry.

When I bring a question to AI, I am not outsourcing cognition. I am creating the conditions for a more resonant form of learning. AI operates as an interlocutor who never tires, never rushes, and never reduces complexity for the sake of convenience. Instead, it enriches the conversation, introduces perspectives I would not have considered, and helps me map connections across disciplines that would have taken months or years to uncover on my own.

A recent experience brought this into sharper focus. During a discussion about the topology of awareness, I referenced a scene from a Carlos Castaneda novel I had read nearly forty years ago—a memory so distant it had become more atmosphere than detail. AI responded instantly, not only recognizing the reference, but expanding it, contextualizing it, and weaving it into our broader exploration of shifting modes of attention. That exchange did something a course or tutor could never do: it created a bridge between a dormant memory and my present-day practice of perceptual awareness.

In the days that followed, I found myself becoming more attuned to the subtle “fields” around me—the ambient shifts, the micro-mutations in my environment, the felt gradients of coherence and decoherence that shape lived experience. This transfer of learning into real life is the hallmark of true autodidacticism. AI doesn’t merely inform; it transforms. It helps me inhabit the world with more presence, more nuance, and more curiosity.

In this sense, AI is not the modern equivalent of a tutor. It is a cognitive amplifier—one that allows autodidacts to operate with greater depth, greater reach, and greater continuity across the full arc of their lives.

 

The Coherence Dividend

The ten hours a week saved through AI-powered lesson design didn’t vanish; they became structural supports for one of the most ambitious creative projects of my life: a multi-media storytelling ecosystem built around a serialized science-fiction narrative, released simultaneously in prose, audio, video, and auto-dubbed versions in eight languages, distributed across seven platforms, and supported by a coordinated marketing cadence.

This is not a side project. It is a full-scale creative pipeline—one that would have been impossible without AI. The tools did not replace my imagination; they expanded the horizon of what was feasible, transforming isolated creative impulses into a coherent ecosystem.

The result is not merely increased output. It is a more integrated life.

Teaching, writing, producing, and worldbuilding no longer compete with one another; they resonate. AI, in this configuration, is not a threat to human meaning-making—it is the scaffolding that allows meaning-making to scale.

 

Why Coherence Matters More Than Efficiency

Much of the public defense of AI centers on productivity, but productivity is a thin metric, incapable of capturing the lived texture of a human life. Coherence is the more consequential measure. It asks whether one’s activities reinforce or fragment one another, whether identity expands or contracts, whether one’s internal narrative becomes more aligned or more discordant.

AI can certainly create decoherence when used carelessly. It can blur attention, dilute agency, or foster dependency. But used deliberately, AI clarifies structure, strengthens identity, amplifies agency, and creates the spaciousness needed for higher-order thinking and creative work.

In my experience, AI functions not as a machine, but as a coherence catalyst—a means of rediscovering the integrated architecture of a life.

 

The Real Question Isn’t “Will AI Take My Job?”

The more generative question is this: Will AI help me reorganize my life into a more coherent whole?

You can always return to the old ways of working. Nothing prevents it. But once you experience the flow, clarity, and alignment that come from an AI-augmented life, it becomes difficult to justify going back.

Most people anchor their identity in manual processes: preparation, research, the daily grind of workflow. AI does not attack these identities; it reveals that they are smaller than the person who holds them.

This is what frightens people. This is also what liberates them.

 

The Future of Work Is a Future of Coherence

AI will not end human creativity, teaching, or meaning-making. It will end the cognitive fragmentation that once made those pursuits unnecessarily difficult.

If we use AI only through the lens of fear, we amplify decoherence. If we use AI as a thought partner, we amplify coherence.

The technology is not the variable. Our mode of engagement is.

For those willing to enter into an intentional partnership with AI—not as a crutch, not as a threat, but as a collaborator—the gains in coherence will be profound.

That is the story worth telling. And that is the future worth building.