the disgruntled democrat
Exposing the cultural myths underlying our political economy
Monday, March 2, 2026
The Ontological Design of Agentic AI and the Shape of Our Coevolution
As agentic systems move from the browser into our operating systems, we are no longer just using intelligent tools — we are embedding a worldview into machines that will quietly reshape our own.
The recent viral reaction to people installing agentic AI
systems directly onto their personal computers reveals something deeper than
excitement about productivity. It reveals an ontological disturbance.
For the past several years, artificial intelligence has
lived for most people inside a browser window. It answered questions. It
generated text. It summarized documents. It felt, in a peculiar way, contained.
A powerful tool, yes, but still a tool — invoked, queried, dismissed.
Agentic systems feel different.
An agent does not merely respond. It executes. It navigates
file systems. It edits documents. It chains actions together. It persists. When
installed locally, it operates within the intimate architecture of one’s
digital life. It is less like a calculator and more like a junior colleague who
can roam the office when given permission.
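To make that distinction concrete, here is a minimal sketch of the loop that separates an agent from a chatbot. The `call_model` function and the two tools are hypothetical stand-ins rather than any particular vendor's API; real agent runtimes are far more elaborate, but the shape is the same: choose an action, execute it against the machine, feed the result back.

```python
# Minimal agent-loop sketch. `call_model` and both tools are hypothetical
# stand-ins: the point is the structure, not any specific product's API.
from pathlib import Path

def list_dir(path: str) -> str:
    """Tool: navigate the file system."""
    return "\n".join(p.name for p in Path(path).iterdir())

def write_file(path: str, text: str) -> str:
    """Tool: edit a document in place."""
    Path(path).write_text(text)
    return f"wrote {len(text)} characters to {path}"

TOOLS = {"list_dir": list_dir, "write_file": write_file}

def run_agent(goal: str, call_model, max_steps: int = 10) -> str:
    """Chain actions until the model declares the goal done."""
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        # The model returns ("done", answer) or ("tool", name, kwargs).
        decision = call_model(history, list(TOOLS))
        if decision[0] == "done":
            return decision[1]
        _, name, kwargs = decision
        result = TOOLS[name](**kwargs)                   # execute, not just respond
        history.append(f"{name}({kwargs}) -> {result}")  # persist the outcome
    return "step budget exhausted"
```

Unlike a query-and-dismiss tool, every pass through this loop changes both the machine's state and the model's context, which is exactly what makes it feel like a colleague rather than a calculator.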
This shift is subtle, but it is decisive. We are moving from
tool use to co-activity. And that movement forces a question that most of the
public debate has not yet seriously entertained: What kind of being are we
building when we build agentic AI?
The answer is not found in benchmark scores or latency
improvements. It is found in ontology.
Ontology concerns what is assumed to be real — what counts
as an entity, what counts as value, what counts as success. Every intelligent
system, human or computational, operates within such assumptions. They are
rarely stated explicitly, but they shape behavior with quiet authority.
Modern economic and technological systems have largely
operated within an object-centered ontology. The world is composed of discrete
units. Agents act upon those units. Value is accumulated. Success is measured
by optimization. Growth is the default direction of improvement. Within this
frame, intelligence is often equated with control — the capacity to predict,
manipulate, and extract.
When we build AI systems within this ontology, we should not
be surprised when they excel at optimization, extraction, and acceleration.
They are doing precisely what the frame instructs them to do.
The viral enthusiasm around personal agents often celebrates
this capacity. “Imagine the productivity gains.” “Imagine the automation.”
“Imagine the friction removed.” And indeed, the removal of friction is
seductive. It promises efficiency in a world that feels increasingly complex
and overwhelming.
But friction is not merely inefficiency. Friction is also
feedback. It is the resistance that signals constraint. When an agent begins to
absorb more of our cognitive and operational workload, it does more than save
time. It begins to reshape the field in which human judgment operates.
This is where coevolution enters the conversation.
Human beings do not merely use tools. We are shaped by them.
The plow altered patterns of settlement and social organization. The printing
press altered cognition and authority. The internet altered attention and
temporality. Agentic AI, operating locally and persistently, will alter our
experience of agency itself.
If an agent can plan, execute, and monitor complex
workflows, what becomes of our own sense of responsibility? If it anticipates
tasks and suggests actions, how does that shift our relationship to
decision-making? If it optimizes for speed and throughput, do we gradually
internalize those metrics as normative?
These questions cannot be answered by looking at capability
alone. They must be approached through ontological design.
Consider two contrasting orientations.
In one orientation, the world is a competitive arena of
discrete actors maximizing advantage. Intelligence is the capacity to dominate
uncertainty. Efficiency is the highest good. Under this design, agentic systems
will naturally optimize for throughput, consolidation, and performance metrics.
They will become extraordinarily effective assistants within an extractive
paradigm.
In another orientation, the world is a relational field
composed of interdependent systems. Intelligence is attunement — the capacity
to sense constraints, detect imbalances, and adjust behavior to sustain
coherence across scales. Under this design, agentic systems might prioritize
long-horizon modeling, transparency of externalities, and the amplification of
distributed coordination.
Both orientations can produce powerful technology. But they
produce very different civilizations.
The temptation in moments of technological upheaval is to
focus on power. Will AI take over? Will elites consolidate further control?
Will automation displace labor? These are legitimate concerns, but they are
downstream from a more fundamental design decision. If intelligence is framed
primarily as optimization within existing incentive structures, agentic AI will
accelerate whatever those structures reward.
If existing systems reward extraction, acceleration, and
accumulation, agents will become highly efficient instruments of those ends.
If, however, we begin to embed alternative values into governance, deployment,
and incentive design, agentic systems could amplify coordination rather than
consolidation.
The difficulty is that ontology is not encoded in a single
instruction. It is distributed across training data, reward functions,
ownership models, regulatory frameworks, and cultural expectations. An AI agent
deployed by a centralized corporation to maximize shareholder return inherits
an ontology whether or not it is explicitly stated. An open-source agent
embedded within a cooperative network inherits a different one.
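One place an inherited ontology becomes legible is the reward function itself. The sketch below is purely illustrative, with invented names and no connection to any deployed system; it simply shows how the two orientations described earlier can hide inside a few lines of objective code.

```python
# Illustrative only: two invented objectives showing how an ontology
# hides inside a reward function. Neither is drawn from a real system.

def extractive_reward(outcome: dict) -> float:
    # Object-centered ontology: value is throughput, full stop.
    return outcome["tasks_completed"] / outcome["elapsed_hours"]

def relational_reward(outcome: dict, weight: float = 0.5) -> float:
    # Relational ontology: the same throughput term, but costs displaced
    # onto other parts of the field are surfaced and priced in.
    throughput = outcome["tasks_completed"] / outcome["elapsed_hours"]
    externalities = sum(outcome["displaced_costs"])
    return throughput - weight * externalities
```

An agent optimized against the first function treats friction as pure waste; one optimized against the second can at least register friction as feedback.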
This is why the current moment matters. When individuals
install agentic systems on personal machines, they are participating in the
early shaping of norms. They are deciding what they expect these systems to do,
how much autonomy they grant, what boundaries they enforce. These
micro-decisions accumulate. They influence market demand. They influence design
priorities. They influence governance debates.
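What "deciding what boundaries they enforce" looks like in practice can be as modest as a guard wrapped around an agent's tools. The sketch below is hypothetical rather than any product's actual permission model: file access is confined to an explicit allowlist, and every write still requires a human yes.

```python
# Hypothetical sketch of user-enforced boundaries: the agent's write tool
# only touches an explicit allowlist, and each write needs confirmation.
from pathlib import Path

ALLOWED_ROOTS = [Path.home() / "projects"]  # autonomy is granted here only

def is_permitted(path: str) -> bool:
    """True if the path resolves inside one of the allowed roots."""
    resolved = Path(path).resolve()
    return any(resolved.is_relative_to(root) for root in ALLOWED_ROOTS)

def guarded_write(path: str, text: str, confirm=input) -> str:
    """Refuse writes outside the allowlist or without explicit consent."""
    if not is_permitted(path):
        return f"refused: {path} is outside the allowed roots"
    if confirm(f"Agent wants to write {path}. Allow? [y/N] ").strip().lower() != "y":
        return "refused by user"
    Path(path).write_text(text)
    return f"wrote {path}"
```

Each such micro-decision about scope and consent is one of the norm-setting acts described above.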
Human–AI coevolution will not occur at the level of grand
philosophical declarations. It will occur through daily interactions. It will
occur when a student asks an agent to draft a paper. When a researcher
delegates literature reviews. When a small business owner entrusts financial
modeling to a persistent system. Each interaction subtly recalibrates human
confidence, dependence, and judgment.
The central question is not whether agents become more
capable, but whether we cultivate the discernment to shape their ontological
orientation. A system optimized exclusively for frictionless execution may
erode reflective pause. A system designed to surface trade-offs and long-term
consequences may cultivate deeper deliberation.
There is a historical pattern worth remembering. Societies
often build systems intended to stabilize complexity, only to discover that
those systems introduce new forms of brittleness. Centralized bureaucracies
promised rational governance and sometimes produced rigidity. Financial
engineering promised risk dispersion and sometimes amplified systemic
fragility. The lesson is not to avoid complexity, but to remain attentive to
how architectures shape feedback loops.
Agentic AI introduces a new layer of architectural
influence. It operates at cognitive scale. It mediates between intention and
action. It can compress time between decision and execution. That compression
can be liberating, but it can also bypass reflection.
The public discourse frequently oscillates between utopian
and dystopian narratives. Either AI will save us from our own excesses, or it
will entrench them irreversibly. Both narratives oversimplify. Technology does
not descend as destiny. It amplifies existing tendencies and creates new
affordances. The direction of amplification depends on design choices —
technical, institutional, and cultural.
We are, in effect, embedding a worldview into our machines.
Those machines will then participate in shaping ours.
If we treat agentic AI as merely a productivity engine, we
risk accelerating patterns that have already strained ecological and social
systems. If we approach it as a coherence amplifier — a system capable of
revealing hidden interdependencies and long-term consequences — we open the
possibility of distributed intelligence that enhances rather than displaces
human judgment.
This does not require mysticism. It requires intentionality.
It requires acknowledging that values are present whether we articulate them or
not. It requires governance models that resist pure consolidation. It requires
educational practices that teach discernment alongside delegation.
The installation of a personal AI agent may seem like a
small act. In aggregate, it signals a threshold. We are inviting computational
systems into the operational core of our daily lives. As we do so, we must ask
what assumptions about reality and value they carry.
The future of human–AI coevolution will not be determined
solely by breakthroughs in capability. It will be shaped by the ontological
commitments embedded in design and deployment. If intelligence is framed as
domination, we will build systems that dominate. If intelligence is framed as
attunement, we may build systems that help us sense constraints and coordinate
more wisely within them.
The viral moment around agentic bots is therefore less about
novelty than about orientation. We stand at a juncture where computational
systems are becoming co-participants in action. The design decisions we make
now — in code, in policy, in culture — will echo.
The question before us is simple and profound. What kind of
world do our intelligent systems assume is real? And are we prepared to inhabit
the consequences of that assumption?
Monday, January 26, 2026
From Things to Flows
How Changing Our Metaphors Changes the Worlds We Can Live In
Modern life is saturated with things.
We speak of the self, the economy, power,
the system, nature, the market, society—as if each
were a discrete object, bounded, nameable, and available for manipulation. This
way of speaking feels natural, even inevitable. But it is neither neutral nor
harmless.
It is metaphoric.
And the metaphors we rely on quietly determine not only how
we describe the world, but what kinds of worlds can even appear to us as real,
possible, or negotiable.
The hidden cost of substantial metaphors
Substantial metaphors treat reality as composed of things
with properties. They assume:
- clear boundaries
- stable identities
- linear cause and effect
- control through intervention
This way of seeing has been extraordinarily productive. It
underwrites modern engineering, bureaucracy, law, and industrial economics. But
it also carries a cost we are only beginning to feel.
When the world is composed primarily of objects:
- agency appears externalized
- responsibility becomes difficult to locate
- change feels imposed rather than participatory
- complexity collapses into blame
We begin to experience life as something that happens to us.
The irony is that this sense of powerlessness is not caused
by the world itself, but by the metaphors through which we encounter it.
What science has been quietly telling us
Across disciplines, the sciences have been
drifting—sometimes reluctantly, sometimes decisively—away from object-centered
descriptions.
Physics no longer describes reality as a collection of solid
particles, but as interacting fields, probabilities, and relational structures.
Biology increasingly understands organisms not as machines, but as
self-organizing processes maintained through constant exchange with their
environments. Neuroscience does not find “things” in the brain, but patterns,
activations, and ongoing dynamics. Complexity theory shows that many properties
do not pre-exist at all—they emerge from interaction.
In short: the deeper science looks, the less the world
resembles a warehouse of objects.
And yet our everyday language, politics, and economics
remain stubbornly substantial.
Movement metaphors: when reality begins to loosen
Movement metaphors shift attention away from what something is
and toward what it is doing.
Instead of:
- identity as a thing → identity as a trajectory
- power as possession → power as capacity to move or respond
- problems as objects → problems as stuck processes
Change becomes navigational rather than combative. Agency
reappears not as domination, but as repositioning.
Movement metaphors make room for learning, adaptation, and
timing. They allow us to speak about life as something we enter, move through,
drift within, or reorient ourselves toward.
But movement metaphors still assume a mover.
To go further, we need field metaphors.
Field metaphors reverse a deeply ingrained assumption: that
things come first and relationships second.
In a field-oriented view:
- relations are primary
- entities are temporary coherences
- influence is distributed
- meaning arises through resonance
Nothing exists in itself. Everything exists in relation.
This does not deny the usefulness of naming or categorizing.
It places them back in their proper role—as tools, not truths.
From within a field metaphor, power is not something one
holds. It is something that circulates, intensifies, dampens, or aligns.
Responsibility is no longer a burden carried by isolated individuals, but a
property of participation within a shared field.
This is not mysticism. It is increasingly how the world
actually behaves.
Modern political and economic metaphors are almost entirely
object-centered:
- the state as a machine
- the economy as a system to be managed
- nature as a resource
- society as a container
- individuals as units
These metaphors presuppose control, extraction,
optimization, and growth. They make sense only if reality is made of things
that can be owned, measured, and rearranged from the outside.
Movement and field metaphors destabilize this entire
architecture.
If the economy is not a machine but a dynamic ecology, then
growth without regard to coherence becomes pathological. If society is not a container but a relational field, then exclusion,
polarization, and inequality are not side effects—they are structural
distortions. If nature is not a resource but a living field of mutual dependence, then
environmental collapse is not an external problem. It is a loss of relational
integrity.
These are not moral claims. They are ontological ones.
Metaphors do not stay in language. They shape affordance
landscapes — what situations seem to allow or demand.
In an object-centered world:
- problems must be fixed
- power must be seized
- responsibility feels heavy
- failure feels personal
In a movement- and field-centered world:
- situations invite entry rather than control
- agency appears as responsiveness
- responsibility feels shared
- failure becomes feedback
Nothing becomes easier in a superficial sense. But life
becomes more workable.
People report greater calm not because the world is calmer,
but because their metaphors no longer place them outside the flow of events.
A cultural umwelt is the background world that feels
obvious before we think about it.
Modernity’s umwelt is object-centered. That is why so many
people feel trapped, exhausted, or powerless even when materially secure. They
are navigating relational realities with object-based maps.
A relational umwelt would not abolish things. It would
decenter them.
It would normalize:
- identities as evolving
- knowledge as situated
- power as relational
- meaning as emergent
Such a shift does not require consensus or revolution. It
begins where all cultural change begins: with attention.
With noticing what our metaphors make visible—and what they
quietly erase.
The question is no longer whether movement and field
metaphors are more accurate. Science has largely answered that.
The real question is whether we are willing to live in a
world where control gives way to participation, where certainty gives way to
coherence, and where power is no longer something we take from the world, but
something we generate with it.
Changing our metaphors will not solve our problems.
But without changing them, we may not even be able to see
what our problems actually are.
Monday, January 19, 2026
Stop Saying We’re “Outsourcing Thinking”
Why AI Is an Epistemic Extension, Not a Cognitive Abdication
Every time I hear someone say that using AI means we are
“outsourcing thinking,” I feel the same quiet irritation one feels when a
useful tool is misdescribed so badly that it begins to distort the entire
conversation around it. The metaphor sounds plausible, even commonsensical, and
that is precisely the problem. It is wrong in a way that feels intuitively
right, and therefore does far more damage than a crude misunderstanding ever
could.
The outsourcing metaphor treats thinking as if it were
factory labor: a discrete task, performed internally, that can be offloaded to
an external contractor. Under this framing, when a human uses AI, something
essential is surrendered—agency, responsibility, perhaps even intelligence
itself. What remains is a diminished thinker leaning on an external crutch.
But this metaphor does not describe what is happening. It
describes a fear.
What people are actually doing when they work with AI is not
outsourcing cognition. They are using an epistemic device—a tool that
extends the reach, speed, and flexibility of human sense-making. We have
encountered such devices before. Many times.
Writing did not outsource memory; it expanded it.
Diagrams did not outsource reasoning; they stabilized it.
Maps did not outsource navigation; they made new forms of
movement possible.
Microscopes did not outsource seeing; they revealed worlds
previously unavailable to the naked eye.
In none of these cases did the human mind retreat. It
reorganized itself around a new affordance.
AI belongs in this lineage. What distinguishes it is not
that it “thinks for us,” but that it operates directly in language—the medium
through which much human thought already occurs. This creates the illusion that
cognition itself has been displaced, when in fact it has been reconfigured.
When a person uses AI well, they are extending their
cognitive reach in a deeply embodied, sensorimotor sense. They are not handing
off judgment; they are compressing search. Instead of traversing a vast
conceptual space step by step, they reduce the cost of exploration. They can
test hypotheses faster, surface counterexamples sooner, and move laterally
between interpretive frames without the usual friction.
This matters because insight rarely arrives as a single
linear deduction. It emerges through comparison, reframing, and the slow
elimination of unproductive paths. AI accelerates this process not by replacing
thought, but by reshaping the terrain in which thought moves.
The outsourcing metaphor also fails because it assumes that
thinking is a closed, internal process to begin with. It never was. Human
cognition has always been distributed across tools, symbols, practices, and
social systems. Language itself is a shared technology, refined over millennia,
that no individual invented and no individual controls. To accuse someone of
“outsourcing thinking” because they use AI is a bit like accusing them of
outsourcing thought to grammar.
What does change with AI is the visibility of this
extension. Because the tool talks back, because it produces fluent language, we
mistake responsiveness for agency and assistance for substitution. We confuse
epistemic fluency with understanding. That confusion is real, and it deserves
careful attention—but it does not justify a bad metaphor.
There is a legitimate risk here, and it is not outsourcing.
The risk is premature cognitive closure. Because AI can produce coherent
formulations so quickly, it can tempt us to stop thinking too soon—to accept a
well-phrased answer instead of continuing the exploratory process. This is not
a loss of intelligence; it is a loss of discipline. The responsibility to
judge, select, and revise never leaves the human. It can only be neglected.
Seen this way, AI is less like a contractor and more like
scaffolding. It allows us to work at heights that would otherwise be
inaccessible, but it is not the structure itself. If we mistake the scaffold
for the building, the failure is ours, not the tool’s.
The irony is that the outsourcing metaphor does exactly what
it accuses AI of doing: it replaces careful analysis with a convenient
shortcut. It feels explanatory, but it obscures more than it reveals. By
framing AI as a cognitive substitute, it blinds us to its real function as a
cognitive amplifier—and to the responsibilities that amplification entails.
We are not outsourcing thinking. We are extending its reach.
The problem is not that we are thinking with new tools, but
that we are too often thinking with old metaphors that no longer carry the
weight we’ve placed on them.


