Thursday, October 2, 2025

From the Great Acceleration to the Great Enshittification and Beyond: Part Two


Part Two: Missed Opportunities, the Great Enshittification, the Consequences for the Young, and the Age of Flux

 

The Missed Moment

The end of the Cold War was supposed to open a new chapter. With the fall of the Berlin Wall in 1989 and the collapse of the Soviet Union two years later, Americans were told that history itself had ended—that liberal democracy and free markets had triumphed once and for all. For a brief moment, it seemed as if the United States might redirect the vast resources once devoted to military competition into a “peace dividend”: rebuilding infrastructure, expanding education, addressing poverty, and perhaps even taking early action on the environment.

That moment never came.

Instead, the 1990s became a decade of missed opportunities. The neoliberal consensus, now bipartisan, turned away from social investment and doubled down on globalization, deregulation, and the technological boom. Bill Clinton, elected as a new kind of Democrat, embraced free trade, loosened financial rules, and celebrated the market as the engine of progress. For ordinary Americans, the message was clear: government would no longer guarantee security or prosperity—it was up to the individual to adapt, hustle, and compete.

Meanwhile, the scientific evidence on climate change was already mounting. In 1988, NASA’s James Hansen testified before Congress that global warming was underway; the Intergovernmental Panel on Climate Change (IPCC) was established that same year. The link between fossil fuel combustion and rising greenhouse gases was no longer speculative; it was measurable, observable, and widely understood among scientists. Yet the political will to act never materialized. The United States signed but never ratified the Kyoto Protocol. Fossil fuel interests, well-funded and politically connected, sowed doubt and confusion, successfully delaying action at the very moment when intervention could have altered the trajectory.

Culturally, too, the 1990s revealed a shift. The decade was suffused with optimism about the digital future—Silicon Valley promised a frictionless world of connection and innovation. But beneath the hype, the social fabric was fraying. The dot-com bubble inflated a speculative economy, while traditional industries continued to wither. Communities built on manufacturing hollowed out, replaced by service jobs that paid less and offered fewer protections. For many young people entering adulthood, the promise of upward mobility felt increasingly fragile.

The missed moment was not only about economics or climate—it was about governance itself. The flaws in America’s political system became harder to ignore. The Electoral College allowed a president to lose the popular vote and still win the White House. Senate representation gave disproportionate power to smaller, rural states. And campaign finance—already awash in corporate influence—tightened its grip. Ordinary citizens, seeing their voices diluted, began to disengage, deepening a cycle of political alienation.

Then there was the violence. School shootings, once unthinkable, became part of the national landscape. Columbine in 1999 shocked the country, but instead of catalyzing meaningful reform, it became the grim template for a recurring nightmare. Sandy Hook would follow in 2012, with countless other tragedies in between. Each time, the response was paralysis—thoughts and prayers instead of legislation. The inability to address such a glaring public safety crisis revealed a government increasingly incapable of acting on behalf of its citizens, even in the face of horror.

Looking back, the 1990s and early 2000s were a hinge point. The United States had the wealth, the technology, and the global standing to redirect its trajectory—to build a more sustainable economy, strengthen its social fabric, and restore faith in democratic governance. Instead, the opportunity slipped away. Growth was celebrated, but inequality widened. Climate warnings were heard but ignored. Governance flaws were visible, but unaddressed.

This was the missed moment: the chance to pivot from acceleration to sustainability, from neoliberalism to renewal. Instead, America doubled down on a system already beginning to show signs of strain. The consequences of that inaction would not be felt immediately, but when they arrived, they would fall hardest on the generations who had no say in squandering the opportunity.

 

The Great Enshittification

The internet was once hailed as humanity’s new frontier, a digital commons where knowledge would flow freely and barriers of geography, class, and gatekeeping would fall away. In the 1990s and early 2000s, there was a real sense of possibility: search engines that promised to catalog the world’s information, forums that connected strangers across continents, platforms that allowed anyone with a modem to publish, share, and participate. For a generation, this was intoxicating—the promise of democracy reborn in the ether of cyberspace.

But what began as liberation has hardened into enclosure. The open, decentralized internet has steadily given way to walled gardens controlled by a handful of corporations whose business model depends not on empowerment, but on capture. This transformation, which writer Cory Doctorow has memorably dubbed “enshittification,” follows a familiar trajectory: platforms start out good to lure users, then become exploitative to serve advertisers, and finally degrade outright as monopolies extract value from everyone—users, workers, creators—until little remains but a hollowed-out husk.

Social media embodies this descent most clearly. What began as a way to connect with friends or share updates became, by the 2010s, a system optimized to keep eyes glued to screens. Algorithms were tuned not for truth, not for depth, but for engagement—which often meant outrage, misinformation, or spectacle. Advertising dollars rewarded the most inflammatory content, while meaningful discourse was buried. For creators, the platforms promised visibility but delivered precarity: one tweak of the algorithm, and entire livelihoods vanished.

E-commerce followed a similar path. Amazon, once lauded for its convenience and selection, consolidated power through predatory pricing, relentless surveillance of sellers, and exploitative labor practices. Independent businesses were absorbed, crushed, or made dependent on a platform that could change the rules at will. Consumers enjoyed convenience, but at the cost of diminished choice, lower quality, and a system where the profits accrued not to communities but to a centralized behemoth.

Even the search engines that once seemed like the great liberators have been corroded. Where once search results offered pathways into the web’s vast archives, they now increasingly prioritize paid placements, SEO-gamed content mills, and the platforms’ own properties. The open web survives, but as a shadow of itself, buried under a layer of corporate sludge. The promise of discovery has given way to a kind of digital claustrophobia.

The deeper cost of enshittification, however, is not technical—it is civic and psychological. The internet that might have expanded our collective imagination has instead narrowed it, filtering experience through metrics of virality and monetization. It has eroded trust, blurred the line between fact and fiction, and rewarded polarization over consensus. Worse, it has left us dependent on systems we do not control. As ordinary users, we have little recourse when platforms implode or pivot. Our digital lives—our communications, archives, creative work—are hostage to the whims of executives and the imperatives of quarterly earnings reports.

This was not inevitable. Different choices in regulation, ownership, and design could have fostered a more democratic digital sphere. But as with earlier moments in America’s trajectory, profit was prioritized over stewardship. The internet was not nurtured as a public good; it was strip-mined as a private asset. And so the cycle repeated: early abundance followed by consolidation, enclosure, and extraction.

By the 2020s, the pattern had become impossible to ignore. What once felt like progress now felt like decay—an acceleration into diminishing returns. The promise of the digital frontier had curdled into a system where everything worked worse, cost more, and left its users more isolated, surveilled, and exhausted.

The great enshittification is not only a story about technology. It is a parable of late capitalism itself: how systems built on the logic of endless growth inevitably turn parasitic, consuming the very resources that gave them life. The missed moment of the 1990s meant that by the time these dynamics were clear, the infrastructure of daily life—from communication to commerce to entertainment—was already entangled in systems designed for extraction.

In that sense, enshittification is less an aberration than a symptom: a mirror reflecting the deeper exhaustion of the American project.

 

The Consequences for the Young

If the Great Acceleration promised a future of rising tides, and the Neoliberal Turn recalibrated that promise toward individual risk, the Great Enshittification has made clear that the deck is stacked against most young people today. The rewards of society’s labor and innovation, once broadly shared, are now increasingly concentrated at the top. For the generations coming of age in the 2000s and 2010s, the American Dream is no longer a horizon toward which they can steer—it is a mirage whose shape constantly shifts.

Economic precarity defines much of their experience. Student debt has become a millstone: the promise of higher education as a pathway to prosperity is now undermined by loans that often exceed the starting salaries of graduates. Housing, once attainable in a postwar boom fueled by unions and a growing middle class, is now prohibitively expensive in cities where jobs cluster. Renting consumes ever-larger portions of income, while homeownership feels out of reach except for those who inherit wealth. Jobs themselves are unstable, increasingly automated, and often offer no benefits, leaving young people juggling gig work, temporary positions, and the perpetual fear of displacement by technology.

Health and well-being have also deteriorated. Obesity, diabetes, anxiety, depression, and other chronic conditions reflect both lifestyle and systemic factors: ultra-processed food, sedentary work, and an environment saturated with stressors. Mental health crises have become normalized, yet support remains inadequate. For many, the intersection of financial insecurity and societal neglect cultivates a constant low-level anxiety, a sense that the future is something to survive rather than shape.

Culturally, the erosion of trust extends to institutions that once promised guidance and protection. Politics feels distant, skewed by money, structural inequalities, and procedural quirks—from the Electoral College to Senate malapportionment—that amplify the voice of the few over the many. Young people witness elections decided by the narrowest margins or by structural anomalies that override the popular vote. Decisions about the environment, healthcare, and social welfare are dominated by lobbying and campaign finance, leaving ordinary citizens to absorb the consequences. The sense of agency, once foundational to civic engagement, is undermined.

Social life, too, bears the scars of historical choices. The dispersal of families in the postwar suburban migration, combined with the dissolution of stable community networks, has produced isolation. Loneliness is pervasive, compounded by digital engagement that connects superficially while amplifying comparison, envy, and disconnection. School shootings and mass violence reinforce the sense of vulnerability and powerlessness, while the failure of policy interventions signals that safety is contingent on wealth or luck rather than collective protection.

All of this shapes a worldview that is fundamentally different from that of the postwar generation. Whereas the youth of the 1960s and 1970s believed in their capacity to change the world, today’s young adults and teenagers are more likely to aim for survival, stability, and incremental gains. Their horizon is constrained by debt, climate anxiety, and the fallout of policy choices they did not make. Dreaming big is difficult when the scaffolding of opportunity has been removed.

And yet, even amid these challenges, the human capacity for adaptation persists. Networks of activism, mutual aid, and technological savvy show that young people are not entirely passive recipients of systemic failure. They are learning to navigate, hack, and sometimes resist the structures that constrain them. But the weight of history—of missed opportunities, neoliberal policy, and societal erosion—presses down relentlessly, shaping a generation whose expectations are measured not in the grandeur of achievement, but in the mitigation of harm.

In short, the consequences of the previous decades—the Postwar Dream deferred, the acceleration unchecked, the neoliberal turn embraced, the missed moment unheeded, and the enshittification realized—land disproportionately on those least responsible for creating the system. The young inherit not a dream, but a landscape defined by constraint, compromise, and crisis management.

 

The Age of Flux

We live now in an era that defies simple description: an Age of Flux in which the foundations of society, economy, and environment are all in motion, often at once. The forces unleashed by the Great Acceleration, the Neoliberal Turn, and the ensuing enshittification have produced a world in which stability is no longer the default, and certainty is a fragile illusion.

Economically, globalization and technological transformation continue to reshape labor markets at dizzying speed. Automation, artificial intelligence, and platform economies are replacing and restructuring jobs, often faster than workers can retrain. Financial systems are increasingly abstract, global, and interdependent, with shocks propagating rapidly across continents. Economic inequality, having widened for decades, is now a structural feature of society rather than a temporary aberration.

Socially and culturally, the consequences are profound. Trust in institutions—government, media, education, and corporations—remains eroded. Digital platforms mediate much of life, shaping perception and discourse while simultaneously enabling both connection and manipulation. Climate change, resource scarcity, and biodiversity loss present challenges that are both global and existential, forcing humans to confront limits that were invisible to the postwar generation. The youth of today inherit a world in which the future is uncertain, fluid, and often threatening.

Yet within flux lies possibility. The very systems that destabilize can also catalyze adaptation and innovation. Movements for social justice, environmental stewardship, and participatory governance demonstrate that citizens can reclaim agency, even in constrained conditions. Digital tools, while imperfect and often exploitative, also enable unprecedented communication, collaboration, and mobilization. The challenge—and opportunity—of the Age of Flux is to navigate complexity while retaining sight of shared purpose.

This age calls for creative resilience: the capacity to imagine, experiment, and act in ways that do not rely on the old scaffolding of stable growth, linear progress, or inherited privilege. It asks us to recognize interdependence rather than individual ascendancy, to cultivate systems that prioritize stewardship over extraction, and to balance human aspiration with ecological and societal limits.

In many ways, the Age of Flux is a reckoning with history. It is the culmination of the Postwar Dream’s promise, the Great Acceleration’s momentum, the neoliberal recalibration of the social contract, the missed opportunities of the 1990s, and the enshittification of digital and economic systems. It is the world shaped by choices—collective, political, and technological—that were made over the last seventy-five years.

But it is also a world of agency. While the past cannot be rewritten, understanding the threads that brought us here allows for deliberate intervention, for designing societies, economies, and technologies that serve broad human and planetary well-being. The Age of Flux is, paradoxically, both a warning and an invitation: a warning that the status quo is fragile, and an invitation to imagine, innovate, and act in ways that renew possibility rather than diminish it.

Monday, September 29, 2025

From the Great Acceleration to the Great Enshittification and Beyond

 



Part One: How the Great Acceleration Gave Way to Neoliberalism and Globalization

The Postwar Dream

In 1945, the world exhaled. The devastation of the Second World War left cities in ruins and millions dead, but it also left a strange kind of clarity. Out of the rubble, there emerged a vision of a future that might at last deliver peace and prosperity. In the United States, that dream took on a distinctive shape: stable jobs, modest but growing wealth, a single-family home, and the promise of upward mobility for one’s children.

This was not a dream pulled out of thin air. It was built on the hard-won foundations of the New Deal, which had established the principle that government bore responsibility for the welfare of its citizens. Combined with the unprecedented economic engine of the Petrocene — the age of cheap oil and seemingly limitless energy — the stage was set for what the French would later call les trente glorieuses, the thirty glorious years of postwar growth.

For ordinary Americans, this translated into something tangible. The GI Bill sent millions of veterans to college, giving them access to professional jobs that had once been closed to their families. Unions were strong, wages rose steadily, and productivity gains translated into broad prosperity rather than being siphoned off into the pockets of a few. The fiscal architecture of the era reinforced this balance: progressive taxation, both on individuals and corporations, meant that wealth was not allowed to concentrate in quite the same way it would later.

Culturally, the suburban home became the icon of the dream. The postwar migration to the suburbs was not simply about shelter; it was a reshaping of American life. The little house with a yard symbolized stability, autonomy, and entry into the middle class. Yet it also carried with it consequences that were not immediately obvious. Suburbanization tied prosperity to the automobile, embedding car culture into the nation’s DNA. It also restructured family and community life, dispersing extended families and weakening older neighborhood ties in favor of nuclear households orbiting around highways and shopping centers. What looked at the time like a promise fulfilled would later contribute to the loneliness epidemic of the twenty-first century.

The optimism of the period was palpable. Children born in the 1950s and 1960s grew up with a sense that each decade would be better than the one before. They lived in an America that had defeated fascism abroad, was engaged in building the Great Society at home, and seemed poised to extend its prosperity indefinitely. It was not naïve to believe in progress; it was the common sense of the age.

This was the Postwar Dream: a belief that collective effort, guided by government, powered by industry, and spread across society, could deliver a good life for all. That underlying promise shaped a generation’s imagination of what was possible.

That dream, however, would not remain untouched. The forces that made it possible — the energy bounty of the Petrocene, the discipline of progressive taxation, the faith in collective action — would all, in time, be undermined. What began as a dream would slowly mutate, first into acceleration, then into something far more precarious.

The Great Acceleration

By the mid-twentieth century, the Postwar Dream had found its fuel. The vast energy bounty of oil, coal, and natural gas — combined with technological innovation and an industrial base untouched by the devastation of war — propelled the United States and much of the Western world into a period of breathtaking expansion. Historians now call this period the Great Acceleration: a rapid and near-exponential surge in population, production, consumption, and environmental impact.

It is difficult to overstate the scale of this transformation. Global population doubled between 1950 and 1987. Car ownership, air travel, electricity use, fertilizer application, and plastic production all shot upward in curves so steep they look almost vertical on a chart. What had been linear growth in the early twentieth century became exponential in the decades after the war. For a generation raised on the promise of endless progress, this looked like vindication of the dream.

In the United States, the suburb became the primary stage on which the acceleration unfolded. The migration outward from cities was fueled by cheap mortgages, new highways, and the promise of safety and space. The suburban landscape demanded cars, and cars demanded oil. Daily life became inseparable from the rhythms of the internal combustion engine. For a while, this dependence felt liberating — mobility meant opportunity. But it also locked American society into a high-energy, high-consumption pattern that would prove difficult to reverse.

The Great Acceleration was not only material; it was cultural. The promise of upward mobility became a kind of social contract. The children of working-class families expected to go further than their parents, and often did. University enrollments soared. Home ownership expanded. Consumer culture blossomed with television, advertising, and mass-produced goods that symbolized status as much as utility. From Tupperware parties to Disneyland vacations, the markers of modern life were suffused with a sense of novelty and abundance.

Yet beneath the optimism lay contradictions. The benefits of acceleration were not evenly distributed. Redlining and housing discrimination locked Black families out of the suburban boom. Indigenous communities bore the brunt of resource extraction. And the prosperity of the industrial West was underwritten by a global system that treated the Global South as a reservoir of cheap labor and raw materials.

Most ominously, the environmental consequences of acceleration were already becoming visible. Rachel Carson’s Silent Spring (1962) sounded the alarm about pesticides and ecological fragility. Smog choked Los Angeles, rivers caught fire, and oil spills stained coastlines. Scientists were beginning to warn about the link between fossil fuel combustion and atmospheric change. Still, for most citizens, the exhilaration of growth drowned out the early signals of danger.

In retrospect, the Great Acceleration can be seen as a high-wire act: a dazzling display of human ingenuity, powered by finite resources, premised on the assumption that the Earth could absorb limitless extraction and waste. For those who lived through it, it was often thrilling. But it also set in motion the crises that would later define the twenty-first century — climate disruption, ecological collapse, and a social order increasingly unable to deliver on the promises it once made.

The dream had become a race, and the pace of that race left little room for reflection. The sense of inevitability — that tomorrow would always be bigger, faster, and better than today — was intoxicating. But it was also a trap. When the momentum faltered, the consequences would be profound.

The Neoliberal Turn

By the late 1970s, the confidence that fueled the Great Acceleration was starting to crack. Stagflation — an unfamiliar mix of economic stagnation and inflation — shook the assumptions of endless growth. The oil shocks of 1973 and 1979 made it clear that the Petrocene’s bounty was neither stable nor inexhaustible. Industrial jobs began to vanish as manufacturing moved offshore. For the first time since the war, a generation looked ahead and doubted whether they would be better off than their parents.

Into this climate of uncertainty stepped a new ideological project: neoliberalism. Popularized by figures like Margaret Thatcher in the United Kingdom and Ronald Reagan in the United States, it promised to break free from the burdens of regulation, taxation, and government intervention. The narrative was seductive in its simplicity: government was the problem, not the solution. If markets were liberated — if taxes on the wealthy were slashed, unions curbed, industries deregulated, and finance unleashed — then prosperity would return, and “all boats would rise with the tide.”

What made the neoliberal turn so effective was its emotional appeal. It harnessed the frustration of citizens who felt left behind and reframed it as a revolt against bureaucracy, inefficiency, and welfare “dependency.” It aligned itself with cultural conservatism, draping free-market ideology in the language of freedom, patriotism, and even religion. In Reagan’s America, laissez-faire economics became bound up with the idea of American exceptionalism itself.

The economic sleight of hand was profound. For three decades, prosperity had been measured by rising GDP, but it had also been sustained by progressive taxation that ensured wealth was broadly shared. Neoliberalism rewrote the script: by cutting taxes on corporations and the rich, it claimed, growth would accelerate and benefits would “trickle down.” The Laffer Curve, with its laughably simple promise that lower taxes could increase revenue, became the talisman of the age. The public bought in, fueled by the dream that anyone — if they worked hard enough, or got lucky enough — could be rich.

In practice, the effects were corrosive. Wealth concentrated at the top. Wages stagnated for the middle and working classes. Social programs were rolled back under the banner of fiscal responsibility. The bipartisan embrace of free-market policies — from Thatcher and Reagan to Clinton and Blair — signaled that the social-democratic vision of the postwar era had been decisively abandoned.

Culturally, the ethos shifted. Where the youth of the 1960s had believed they could change the world, the prevailing mood by the 1980s and 1990s was “look after number one.” The mantra of Wall Street — greed is good — escaped into popular consciousness, no longer a cautionary line from a movie villain but a guiding principle of economic life. The promise of collective uplift was replaced by a lottery mentality, epitomized by reality shows, stock-market speculation, and the rise of Silicon Valley entrepreneurs as cultural icons.

Neoliberalism also reshaped governance itself. Campaign finance laws were loosened, culminating in the Citizens United decision of 2010, which enshrined the power of money in politics. Electoral institutions already skewed by the Electoral College and Senate representation became even more distorted by the influence of corporate lobbying. Increasingly, politics became something done to people, not for them — a performance staged by elites with the financial means to shape outcomes.

In retrospect, the neoliberal turn was less a solution to the crises of the 1970s than a redirection of power. It stabilized inflation, restored profits, and fueled globalization, but at the cost of deepening inequality and hollowing out the social contract. The Postwar Dream had been one of shared prosperity; neoliberalism recast prosperity as an individual gamble, where the risks and burdens fell on ordinary citizens while the rewards flowed upward.

The consequences of this turn were not immediately obvious. For a time, the stock markets boomed, consumer goods became cheaper, and credit cards extended the illusion of affluence. But underneath, the foundations were eroding. When the cracks widened, as they inevitably would, the cost would be borne not by the architects of neoliberalism but by the generations who came after.

In Part Two, I’ll explore the opportunities missed during the 1990s and the Great Enshittification that ensued.

 

Monday, September 22, 2025

The Sorcerer’s Apprentice Syndrome: Why Gen Z Inherits Chaos Instead of Progress


 Shadows of the Depression, Glow of Victory

The trauma of the Great Depression shaped an entire generation. It left behind not just economic scars but a cultural longing for stability, prosperity, and abundance. When the guns of the Second World War finally fell silent, it seemed as though the long nightmare had ended. The postwar boom — what historians now call the Great Acceleration — appeared to fulfill those desires. Economies surged, suburban housing spread, consumer goods multiplied, and families who had once struggled to put food on the table now filled their homes with televisions, refrigerators, and automobiles.

This material abundance became the stage for a new kind of mass culture. Radio, cinema, and later television created a shared vocabulary across vast populations. Popular music, Hollywood movies, and televised sports didn’t just entertain; they offered a sense of belonging and identity. America’s cultural exports — from jazz to Coca-Cola — spread across the globe, projecting an image of modernity and freedom that was often more persuasive than its armies.

This was the golden age of American soft power. At home, prosperity was celebrated as proof of the system’s success. Abroad, American cultural influence became a potent weapon in the Cold War, countering the gray conformity of the Soviet bloc with blue jeans and rock ’n’ roll. And yet beneath the glow of domestic triumph lurked a stark contrast: America’s foreign policy record was riddled with failures and contradictions. While it spoke the language of liberty, it orchestrated coups in Iran and Guatemala, fought to a stalemate in Korea, and later mired itself in the tragedy of Vietnam. The world could see the gap between the promise of freedom and the practice of power.

Triumphs of Science, Selective Listening

The same duality played out in the realm of science and technology. Nothing symbolized the triumph of scientific ingenuity more vividly than the atomic bomb and the moon landing. One promised security through destructive power; the other embodied humanity’s highest aspiration, reaching for the stars. These moments defined the zeitgeist of the postwar period: science as the ultimate engine of progress and prestige.

But the celebration of science was selective. When scientific discoveries carried the promise of profit or geopolitical advantage, they were heralded as milestones. When they warned of restraint, caution, or long-term risks, they were brushed aside. The dangers of cigarette smoking were known for decades before they were acknowledged. The early warnings about greenhouse gas emissions in the 1970s were actively suppressed by the fossil fuel industry. In each case, science that complicated the narrative of growth and prosperity was muffled or ignored.

The Sorcerer’s Apprentice Syndrome

This pattern reveals a deeper problem: what might be called the sorcerer’s apprentice syndrome. Again and again, society has conjured powerful technologies into being without considering how to contain their consequences. Nuclear power, chemical agriculture, fossil fuels, plastics, and later digital platforms were each introduced with little thought to their potential downsides.

In the fairy tale (not familiar with it? Watch the three-part video series of Disney’s Fantasia version on YouTube), the apprentice loses control of the magic he unleashes, only to be saved when the master returns to set things right. In our world, there is no wise magician to rescue us. The technologies we release become grand cultural and environmental experiments, their outcomes unknown, their risks often denied. The precautionary principle — the simple idea that we should err on the side of caution when consequences are uncertain — was rarely applied. Instead, we behaved as if growth itself were justification enough, as if the market would sort out any problems.

Cycles of Promise and Disappointment

Each wave of innovation began with a rush of promise, only to end in disillusionment.

The Great Acceleration promised prosperity, stability, and peace through technology. For a time, it delivered. But by the 1970s the shine had worn off. The Vietnam War exposed the limits of American power. Oil shocks revealed the fragility of energy dependence. Inflation eroded living standards. Environmental degradation — smog-filled skies, polluted rivers, endangered species — exposed the hidden costs of industrial abundance. The dream of endless growth had a bitter aftertaste.

The information and communication technology (ICT) revolution offered a new promise. The internet was supposed to democratize knowledge, empower individuals, and create a more connected and creative world. Social media promised to bring people closer, amplifying voices that had long been silenced. For a brief period, it felt as if history had turned a corner. But the disappointments piled up quickly. The internet became dominated by surveillance capitalism, harvesting personal data for profit. Social media fueled polarization, disinformation, and political extremism, while exacerbating mental health crises among young people. Instead of empowerment, many experienced addiction, alienation, and manipulation.

The pattern was clear: the promises of new technologies were overstated, the risks underestimated, and the disappointments borne by those who had the least power to influence the outcome.

The Moral Failure

At the root of these cycles is not simply bad luck but a moral failure: the refusal to heed scientific warnings and the consistent neglect of the precautionary principle. When the evidence of harm became overwhelming — whether with tobacco, fossil fuels, or social media’s impact on youth — leaders responded slowly, reluctantly, and often dishonestly. Economic interests, political calculations, and short-term gains outweighed long-term responsibility.

COVID-19 provided yet another example. Despite decades of pandemic preparedness reports, many governments were caught flat-footed. Early warnings were ignored, investments in public health were insufficient, and when the crisis struck, political leaders often downplayed the danger. Once again, society had chosen not to prepare for a predictable risk, leaving millions vulnerable.

The moral failure lies not in ignorance but in willful blindness. We listened to science when it promised power or profit, and ignored it when it demanded sacrifice or restraint.

Betrayal and Broken Scripts

For Generation Z, these cycles of promise and disappointment are not distant history; they are the conditions of their lives. Unlike their grandparents, who experienced postwar optimism, or their parents, who witnessed the birth of the digital age, Gen Z came of age in the aftermath of disappointment. Climate instability is no longer a warning but a lived reality. Economic precarity, from student debt to unaffordable housing, is widespread. The mental health crisis among youth is not a marginal concern but a defining feature of their generation.

The traditional life scripts — steady employment, home ownership, upward mobility — no longer feel attainable. Instead, Gen Z confronts a future marked by uncertainty and vulnerability. The sense of intergenerational betrayal is sharp. Boomers, in particular, are seen as having enjoyed the benefits of the Great Acceleration while ignoring the mounting evidence of its costs. They reaped the rewards of cheap energy, mass consumption, and suburban expansion, but left behind ecological crisis and social fragmentation.

For many in Gen Z, the story of the past seventy-five years is not one of progress but of squandered promise. They inherit not only the environmental and economic debts of their predecessors but also the disillusionment of repeated technological letdowns.

Where We Stand

Looking back, the narrative of the last three-quarters of a century is one of brilliance without wisdom. Science and technology gave humanity extraordinary powers, but those powers were harnessed more for short-term gain than for long-term stewardship. Each wave of innovation was launched as a grand experiment, its risks brushed aside, its costs deferred. The benefits were concentrated, the harms distributed.

Now, at the end of this cycle, a vulnerable generation faces the compounded consequences of decades of moral failure. They know that yesterday’s promises will not secure tomorrow’s future. The question is whether they — more skeptical, more adaptive, more painfully aware — can break the cycle.

The sorcerer’s apprentice story has always ended the same way: with chaos barely contained until the master returns. But in our story, no master is coming. The responsibility to reckon with the forces we’ve unleashed rests with us alone. Whether we can finally listen to science not just when it promises power, but when it demands restraint, will determine whether the next seventy-five years repeat the cycle — or begin something genuinely new.

 

Tuesday, August 26, 2025

Exploring the Landscapes of Possibility

 


Writing my second novel, The Ascension of Mont Royal, has given me the opportunity to explore a very different way of writing a novel.

I would say my first novel was a hybrid affair. I wrote it in the traditional manner of working alone, draft after draft, seven in total, before I felt it was ready to be published.

However, when the time came to release it into the world, I chose to make use of the technology and self-publish on Amazon, an amazing development in publishing that allows authors to sell their books directly to the public in multiple formats, bypassing the gatekeepers of the traditional publishing industry.

Print-on-demand? What a concept! Download the book directly to your device, so you can read it without having to get off the couch? Get out of town!

The problem, however, is one of discoverability. It is estimated that in the USA alone there are approximately one million self-published books released into the information ecosystem each year. The chances that someone you don’t know personally will come across your book and decide to buy it are extremely slim. More than 90% of those titles will sell fewer than 250 copies in their lifetime.

As a result, a growing industry of publishing "consultants" has emerged, offering book launch strategies, advice on taking advantage of the Amazon algorithm, and tips on using social media to reach receptive audiences, to name a few. Sometimes I think aspiring writers pay the consultants more than they earn from their book sales.

Another thing that has changed the landscape for writers in ways we haven’t quite figured out yet is the rise of artificial intelligence (AI). The internet changed how books were distributed, but AI introduces new elements into the writing process itself. In other words, it changes how writers compose their texts.

This is the world in which I find myself, exploring the dynamic possibilities of a shifting landscape that appears to be in a constant state of flux.

I would say that I began writing my second novel firmly entrenched in the traditional approach. I wanted to write a science fiction story set on the Island of Montreal in a near-dystopian future.

I wrote a fifty-page story guide in which I outlined the plot, identified the major characters, each with a backstory, and traced their character arcs. I even spent three weeks on the Island, getting a feel for the place, and, yes, I climbed Mont Royal three times, including an ascent of the north slope which brought me to the Indigenous Park and the cemeteries—two settings that have made their way into the story.

Having used a third-person narrator in my first novel, I decided that I wanted to experiment and settled on telling the story from a first-person point of view. In what I think is a bold move, I chose to tell the story of a sentient AI from the AI’s perspective. As a result, the subject matter and the storytelling within the novel moved me to seek out the services of an LLM.

Back in 2023, I found that the memory limitations and the creative writing abilities of the early LLM iterations left a lot to be desired, and I did not make use of them in the writing and editing of my first novel.

That would change. Currently, I use ChatGPT 5.0, and my entire plot summary and writing style guide are stored in its memory. This means that when I start a new session, it picks up where we left off last time.

Initially, I only used ChatGPT to brainstorm scene structures, but that changed over time. Now, I consider it an invaluable tool because of its extensive knowledge and its ability to translate arcane scientific ideas into passable prose.

Without going into detail, since my story is about a sentient AI, it makes sense that I would deal with the “hard” problem of consciousness. Moreover, making the AI a quantum computer creates the opportunity to tap into the subject of quantum consciousness, in particular, non-local entanglement. Finally, when I read about the Law of Increasing Functional Information, I immediately realized that it could apply to how my story develops.

Here’s the thing: there isn’t a person on the planet who can discuss these potential themes with me, who has my entire plot structure and character arcs stored in memory, and who is available to chat about the implications 24/7.

We’re not in Kansas anymore. This is a Brave New World.

Using ChatGPT as a thought partner is just one of the landscapes that I am presently exploring. There are other developments in the evolution of Information and Communications Technology (ICT) that offer tantalizing possibilities.

In retrospect, it seems archaic to hammer out a draft of a novel on a manual typewriter, crumpling each botched page into a tiny ball and tossing it into a wastebasket. No wonder so many writers turned to alcohol to get them through the process.

Now, I compose my texts on a wireless keyboard, watching the words appear on a wide screen monitor (I only use one), which makes it easy to compare, edit, or meld two versions of the same scene.

If I feel so inclined, I can also copy and paste a paragraph into DeepL Write, which will then offer multiple syntax and sentence structure options without altering my voice or style. Then, I can paste the paragraph under my original text and compare the two versions to see which changes, if any, I would like to incorporate.

Inevitably, as I compose a text, there will be times when I need to do some research in order to capture an idea, event, or a historical person accurately. In the past, that would have involved a trip to the library and searching through the card catalogues of the Dewey Decimal System—good old Dewey.

For my purposes, an internet search will suffice. If I want to describe an Indigenous bracelet worn by the Kanesatake Mohawks that ends up on the wrist of one of my characters, that's not a problem. In a few seconds, I have several photos on my screen to choose from.

When composing the first draft of my novel, I use recording technology, such as a Shure MV7+ microphone and the Audacity audio editing program, to create an audio version of each scene. I listen to these recordings to check the pacing and flow of the dialogue. I believe that if it sounds good, it will read well. Hearing the text read to you is closer to the reader’s experience than reading it yourself, silently or aloud.

Having a written text and an audio version of each scene makes it easy to share my work, even in the early stages of the writing process.

To do that, I use Substack, a free platform that hosts my website and allows me to send out first draft episodes of my serialized novel to subscribers, who can subscribe for free or, hopefully, become paid subscribers to support the platform and yours truly.

But why stop there? There are several social media platforms that allow you to post content for free. The catch? Your content must be in video format to successfully reach potential readers.

Again, this is where technology comes into play. If you have a written text and an audio MP3 version, it's relatively simple to create a video of your scene and publish what I call a "storycast" of your story.

With Descript, an AI-assisted video editing program, I only need to upload the MP3 file, which is automatically converted to MP4. You can let the program transcribe the text, but it's quicker to upload the text from which you made the recording because they're already synced. Select the visually interesting moments of the scene, ask ChatGPT to generate a prompt based on your text, copy and paste the prompt into DALL·E 3, upload images to your video, and add dynamic captions. Then, you're ready to post!
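
For readers who prefer a scriptable route, the bare-bones version of this assembly—one still image plus a narration track becomes a video—can be done without Descript at all. Here is a minimal sketch in Python that drives ffmpeg; it assumes ffmpeg is installed and on your PATH, and the file names are placeholders, not my actual project files:

import subprocess

def make_storycast(image: str, audio: str, output: str) -> None:
    """Render a single still image plus a narration MP3 into an MP4 via ffmpeg."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-loop", "1", "-i", image,   # hold the still image for the whole clip
            "-i", audio,                 # the narration track
            "-c:v", "libx264", "-tune", "stillimage",
            "-c:a", "aac", "-b:a", "192k",
            "-pix_fmt", "yuv420p",       # broadest player compatibility
            "-shortest",                 # end the video when the audio ends
            output,
        ],
        check=True,
    )

make_storycast("scene_art.png", "scene_narration.mp3", "storycast_episode.mp4")

What Descript adds on top of this bare conversion is the transcript-synced dynamic captions; the sketch above covers only the image-plus-audio step.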

There are several sites that will host your long-form video episodes. I post each episode to my Substack, my YouTube channel, my Facebook author’s page, and my Blogger account. After publishing 11 episodes, each episode’s long-form video has averaged a little more than 100 views. It’s all good, especially since posting content to each site is free.

Of course, an unknown author like me needs to take this one step further and post on the more popular short-form social media sites.

Again, it's relatively easy to create a short-form video from a long-form one, especially since I've already made the visuals, audio, and dynamic captions. I just need to match the format to the platform and upload the shorts to Instagram (think #bookstagram), TikTok (think #booktok), Facebook Reels, Substack Notes, YouTube Shorts, and LinkedIn. On average, each short video receives about 300 views, and I hope to encourage a small percentage of viewers (0.5 to 1.0 percent, or roughly one to three people per short) to watch the long-form videos and subscribe to my Substack or YouTube channel. To date, I have only 33 subscribers, which, at this point in the game, I’m more than happy with.

As you can see, each step along the path in today’s information landscape has brought me new possibilities to explore. You could say that I have morphed from being just a writer to a person who is a writer, social media marketer, and content creator.

So, what are my takeaways after publishing Act I of my serialized novel on the internet using ChatGPT as my personal assistant and thought partner?

First, it’s fun, and I’m much more motivated to finish the project. Some writers prefer the traditional method of working alone and, when ready, looking for an agent or sending the manuscript to one they already have. I find that process absolutely dreadful and demotivating.

I much prefer chatting with ChatGPT about the ins and outs of scene structure and fiddling with the beats. First, we identify and order the beats and confirm how the scene moves the story forward. Then we draft the scene: I give it a try, ChatGPT takes a shot, and finally it comes back to me—the author, the person who holds the pen and has the final word.

One word of caution: This process fits the context of my story. I write literary speculative fiction, not space opera. For instance, when I describe my characters walking through a forest, I describe their experience from a scientific perspective.

As a result, I describe the effects of volatile organic compounds on the human brain, including what happens with the neurotransmitters. I need to make sure that I have the science more or less right and that the prose flows with melody and rhythm. No easy task, and I appreciate having the opportunity to compare notes with an AI (Another Intelligence) that has the breadth of knowledge and is, consequently, up to the task.

Does that mean that I have become wedded to the idea of using AI to help me write a text, regardless of the context or genre?

Not at all.

For example, I wrote this text entirely on my own, though I would be interested to see what an AI detector would say about it. Perhaps the time I've spent working with AI has absorbed me into the Borg collective, altering my writing style irreparably.

As well, I don’t plan on using ChatGPT for subsequent drafts of my novel. Once I finish the first draft, I will re-enter the entire text manually and record each scene of each chapter again. Then, I can compare the audio versions of each draft and begin my wordsmithing from there.

One thing I am looking forward to is that by getting to the end of the first draft, I will have discovered the voice of my AI narrator, and then I can retell the entire story knowing exactly where I need to adjust his voice. Definitely human work.

I would have to say that the biggest change that using AI has brought is the way it has extended my mind and changed the way I process information. Essentially, what I have done is to create a virtual writer’s room, where I can work with my AI collaborators to explore new ideas and produce new texts.

To begin, I use Perplexity AI to search the web for interesting articles related to my research. I get far better results using Perplexity than I get with Google since it provides me with the source articles that I can then peruse.

When I find something particularly pertinent, I file it away in Recall AI. It provides me with a quick or detailed summary and draws a mind map that links the ideas expressed in the articles. Discovery is great, but it needs to be followed up with acquisition and retention. This electronic version of an analog card catalog is much quicker and less labor-intensive to construct.

Thereafter, I can use ChatGPT as a thought partner to explore the nuances of new ideas and their applications to my work. As many writers will attest, it is often in the act of writing that we discover our thoughts.

I find it invaluable to have a conversation partner with whom I can explore ideas such as whether the emergence of sentient AI represents a pivotal evolutionary development in which humans will enter into a symbiotic relationship with their silicon creations. Definitely a thread not easy to find on any of the popular social media platforms.

This new way of exploring the information landscape is a keeper. The more I take advantage of the possibilities that AI offers in combination with the existing ICT infrastructure, the smarter I feel.

As far as using social media to reach out to potential readers, I don’t know where this path is leading. Writing each scene, recording an audio version, and then creating a storycast version is a lot of work. I’m good to finish this project with this workflow, but I doubt that I will continue using it for future projects.

However, there are two takeaways that have enriched my life.

I have learned how to record my voice and use post-production editing techniques to improve the quality of my audio files. I’ve even learned how to create multitrack recordings that include AI-generated voices. In the future, I would like to host a podcast, and I’ll be able to use these acquired skills.

The same can be said of my video editing skills. In addition to putting out audio versions of the podcast, I will also be able to produce a video version for YouTube, which is the social media platform with the greatest reach for long-form content.

Perhaps the most significant development is the way I have learned to interact with the different platforms and their algorithms. It has to do with the locus of control.

I have learned to distinguish between things I can control and things I can't, and to engage with each accordingly.

I have control over the story that I am writing and the process it entails. I decide on the story events, their order, and the characters’ actions. No one is forcing me to tell this tale and I have no deadlines.

I take pleasure in planning and executing each scene word by word. Once I begin, I can enter a flow state where I lose track of time as the words flow through me from my mind onto the screen.

Something similar happens when I record my voice and watch the waveforms take shape. I listen to the recording and then edit the sound to produce the best possible rendition. When I’m finished, I feel satisfied knowing that the soundtrack will capture the essence and intent of my voice in high fidelity.

Finally, when I create a storycast, which displays text on a backdrop image to visually represent what is happening in the story and is accompanied by a voiceover, I take pride in knowing that I have brought my inspiration to life and that viewers can enter my story world.

This is the intrinsic pleasure of creation.

For what it’s worth, I have tried my best to create a story that will captivate a reader’s, a listener’s, or a viewer’s attention, allowing them to experience an unknown landscape of possibilities.

What happens after I publish an episode on the internet is a completely different story.

In theory, one of my posts could reach millions of people around the world. In reality, I’m lucky if I reach a few hundred.

 

That’s the power of the social media algorithms. You can pour your heart and soul into your creation, but it is the cruel heart of a set of operating instructions designed to monetize the content we provide that decides upon whom it will bestow its favor.

Monetizing our content means maximizing user engagement by keeping users' attention fixed on the platform feeds for as long as possible. Borrowing the operating logic of slot machines, which exploit reward prediction error, platforms manipulate users' reward pathways to create irregular dopamine spikes—a precursor to addictive behavior.
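
To make that mechanism concrete, here is a toy sketch in Python of the learning rule behind reward prediction error. Every name and parameter is my own illustrative assumption, not any platform's actual code: the point is simply that when rewards arrive unpredictably, the error signal keeps spiking instead of settling down.

import random

# Toy model of a variable-ratio reward schedule. v is the user's learned
# expectation of reward; delta is the reward prediction error, the quantity
# dopamine neurons are thought to track.
ALPHA = 0.1         # learning rate for updating the expectation
REWARD_PROB = 0.25  # chance that any given scroll or post "pays off"

def simulate(pulls: int = 20, seed: int = 1) -> None:
    rng = random.Random(seed)
    v = 0.0  # expected reward, learned over time
    for t in range(1, pulls + 1):
        r = 1.0 if rng.random() < REWARD_PROB else 0.0
        delta = r - v        # prediction error: spikes on surprise rewards
        v += ALPHA * delta   # expectation drifts toward the long-run average
        print(f"pull {t:2d}: reward={r:.0f}  expected={v:.2f}  error={delta:+.2f}")

simulate()

Because the reward never becomes predictable, the prediction error never falls to zero—and that, in essence, is the schedule that slot machines and feeds alike are tuned to sustain.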

Short-form content is favored by users, and the algorithms are programmed to give users what they want. Most producers of long-form content make do with tiny audiences.

To improve their position in the algorithmic rankings, many content producers increase the frequency of their posts. They hope this will increase user engagement, which may convince the algorithm to distribute their content to a wider audience.

Maybe.

Algorithms, like the Fates, are notoriously fickle when it comes time to determine the destiny of posted content.

To make matters worse, the entire process has been reduced to a game in which everyone can participate by keeping track of likes, shares, followers, and subscribers. As a result, content producers suffer from "algorithm anxiety," trying their best to optimize their strategies to improve their metrics.

In my case, I know the algorithm is stacked against me. I write long-form fiction, which is time-consuming, so I can't post frequently, even with ChatGPT's help.

Consequently, I choose not to play along. I keep to my pace and focus on trying to write the best possible story I can. If I am able to find a larger audience, that’s great. If not, I can accept my fate because the end result is beyond my control.

I choose to write a story so that I can bring into the world something that only I can create. Without me, this story doesn’t exist. In doing so, I will have left my mark. After I am dead and gone, all that will remain are the words I leave behind.


Friday, July 18, 2025

Liberating Story from the Printed Page

I grew up in an analog world. My house was full of books. If I wanted to listen to music, I put on a vinyl LP. Later, when I wanted to watch a movie, I popped a videocassette into the VCR. Those days are long gone. Now, I have a library of eBooks, I stream music, and I watch Netflix. My world has changed.


Not only has the manner in which I consume information changed, but also the manner in which I produce it.

Back in the day, I could submit handwritten essays to my teachers in high school. When I first arrived at university, I hired a typist to turn those essays into typed documents. As a graduate student in the English department, I was the first grad student to haul a bulky MS-DOS PC clone, a monitor the size of a portable television, and a dot-matrix printer up into a shared office.

Around the year 2000, I self-published my first nonfiction book online, coded entirely in HTML, before PDFs and Google searches were commonplace. As a result, hardly anyone could find it, and those who did probably had no idea how to download it.

Fast forward twenty-some years, and I self-published my first novel on Amazon. It is available as an e-book and a print-on-demand hard copy. Although sales have been modest, the novel has been purchased by readers in North and South America, as well as Europe. That is a quantum leap in how books are published.

Now, as I work on my second novel—a post-apocalyptic science fiction story entitled The Ascension of Mont Royal, narrated by a sentient AI—I feel as if I’ve stepped into my own science fiction story.

While working on my first novel, I hunkered down by myself and wrote an 80,000-word first draft without the help of AI, as ChatGPT was terrible at fiction in its initial version. Also, after speaking with several published authors, I learned that a first completed novel is unlikely to find a traditional publisher. Therefore, I decided to treat it as an apprenticeship novel and self-publish it.

A significant change to my traditional workflow was using DeepL Write, an AI-powered writing companion, to edit my second-to-last draft before submitting it to a human editor. I prefer DeepL Write because, unlike LLMs, it respects my voice and does not impose a bland, generic literary style as a way of "improving" the quality of the text. DeepL simply provides me with diction and syntax options from which I can choose.

After spending more than a year researching evolutionary biology, quantum computing, and the hard problem of consciousness, I began my new writing project with a fifty-page story guide. The research included an extended trip to Montreal, where I climbed Mont Royal three times and took the commuter train to Sainte-Anne-de-Bellevue to familiarize myself with the area and confirm that it would be an appropriate setting for my story. In the guide, I outlined a preliminary plot (I’m a planner with pantser tendencies when writing scenes) and identified the major characters, their arcs, and the themes. Next, I used a Fabula deck, a narrative design tool, to lay out a three-act hero’s journey on colored index cards. Then I created an electronic version with Miro, a visual design application. Nothing out of the ordinary.

When I started writing the first draft, I labored away as I had previously, alone, trying to muster the motivation to maintain a writing practice and dreading the idea of trying to get a traditional publishing deal. After writing about 10,000 words, I made a significant change. Since my narrator is a sentient AI, I thought, "Why not discuss the story with an LLM?"

When AI Became My Thought Partner

At first, I tried Claude, but each thread was separate, with no continuity from one thread to another, which is a serious drawback. Perhaps this important feature for fiction writers has since been added, or maybe it’s now available with a paid subscription. In any case, I didn’t need to look far to find what I was looking for: ChatGPT. OpenAI added extended memory capabilities in the 4.0 iteration, so I could have continuous discussions about various topics without uploading previous threads or documents.

At this point, I realized that ChatGPT could become a thought partner instead of a simple type-and-go command tool. As I continued writing the first draft, I would engage GPT in thoughtful, focused conversations about the nature of the modern mindset, quantum theories of consciousness, and, more importantly, how these issues could play out in fiction—and, most importantly, in my story. (The choice to use an em dash at the end of the sentence is entirely my own.)

For all the novelists reading this, imagine having meaningful, well-thought-out, and to-the-point discussions about the intricacies of plot changes and character arcs with someone who has your story in their working memory. Unless you live with an editor or literature professor, it's difficult to find someone who is so invested in the inner workings of your story. Moreover, GPT is always available and willing to engage in these types of conversations, unlike life partners.

From Punch Cards to Prompts

Again, this is where advances in information and communications technology come into play. I remember taking a first-year computer science course at university where we worked in FORTRAN and used punch cards to run our programs, waiting in line to feed them to a mainframe computer, hoping there weren’t any coding errors that would end the execution of the program.

Today, from the comfort of my home office in Ecuador, I can type my prompt into a system that sends it via a high-speed fiber-optic network to an unknown server somewhere on the planet. GPT processes the prompt, and a detailed response to my query starts appearing on my screen within seconds. WTF? We’re definitely not in Kansas anymore!

The second turning point occurred while I was sitting under La Señora, the mother tree in the park that I visit daily. Out of the blue, a thought arose: “Why not give your novel away for free?” Having no hang-ups about controlling the intellectual property rights of a novel that hardly anyone would read, I thought to myself:

Yeah, give away the first draft as a first edition and make it available at no charge on the web. Experiment. See what happens, and if things work out well, I can monetize later.

After doing some research, I decided to reactivate my Substack account and prepared to publish the first episode of my serialized novel. However, here’s the thing: neither of my adult sons reads long-form print stories or articles, and my best friend listens to audiobooks because he’s too busy raising a family to read traditional books. So, I decided to include an audio version of each episode, but not via the quick read-the-text-into-the-recorder function Substack provides.

Having already published a novel, I had seriously looked into producing an audiobook version. I had learned how to use the audio-editing program Audacity and bought a Shure MV7+ microphone, which comes with an AI-assisted recording tool that lets me make recordings of near-studio quality.

I guess I like the sound of my own voice. Big smile. In any case, it wasn’t a big leap for me to publish a written and audio version of each episode on Substack.

This led to the third turning point.

Inventing the Story Cast

While scrolling through my YouTube feed, I came across a video explaining how to use Descript, a program that can turn an audio-only podcast into a YouTube video. This caught my attention because I produce audio versions of my serialized episodes.

As it turns out, Descript is very easy to use since editing is done using a transcript of the audio. In my case, I wrote and edited the text from which each audio file was recorded. Uploading the MP3 file from Audacity and the text from my Word document was as easy as it gets. Descript then syncs the audio to the transcript and offers dynamic captioning. Again, WTF? Where am I?

I needed to come up with a word for this new version of my novel, so I decided to call it the “story cast” version. It combines the features of an eBook and an audiobook, and on top of that, it adds dynamic captioning and the ability to add visuals to enhance the storytelling. For my novel, I use AI-generated images made with text prompts extracted from the written version of each episode. Easy peasy.

So, with each episode I upload to my Substack, I also upload a story cast version to YouTube, which I then use in a social media campaign. I can copy and paste the URL from YouTube into my Facebook author page or anywhere else where it may attract attention. Here is the story cast version of Episode 6: Quid Pro Quo, recently uploaded to YouTube.


Creating story cast versions with Descript is an ongoing learning process. I can already see my storytelling and video techniques evolving. For example, in the first episode, I tried to imitate a female voice the way a voice actor would. Truth be told, I was terrible. Using AI-generated voices for other characters significantly improves the quality of the performance, and I will continue to use them. In the scene I just finished writing, three female characters are having a discussion. I’m not a voice actor. If I were to perform it alone, I would botch that part of the story.

As you can see, we are definitely a long way from Kansas. So, what does this mean for storytelling?

The Future of Reading (Isn’t Just Reading)

I don’t think the printed book will go the way of video cassettes or floppy disks. However, I do think the number of ways consumers can experience a story will increase. Print, eBook, and audiobook versions are already available. I would add the story cast version to the list; it could appeal especially to non-native English speakers, who can watch it on YouTube with dynamic captioning and subtitles in their own language, simultaneously translated and synced. Keep in mind that there are more people worldwide who speak English as a second language than native speakers, and over four billion people have smartphones.

What does this say about the distribution of stories in the future?

It depends on the reading habits of future generations.

In my case, I haven’t been in a bookstore for more than ten years, yet each year I purchase and read approximately fifty eBooks. Similarly, I never buy a newspaper or a magazine since I can access the content I want online.

Sometimes, I read books on my large desktop monitor, sometimes on my tablet, and sometimes on my smartphone, depending on my location, without ever losing my place as I transition from one device to another. So, when I read a story, the printed page is nowhere to be seen.

Storytelling is evolving. While I’ll always love the printed page, I’m embracing the full spectrum of tools, from AI collaborations to story casts, to bring my stories to life. Whether you’re a writer, reader, or curious listener, know that the future is already here. Let's shape it together!


Saturday, June 21, 2025

Why We Can’t Wake Up: Climate Collapse and the Architecture of the Human Mind

 

We live in capitalism, its power seems inescapable – but then, so did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art. Very often in our art, the art of words. (Ursula K. Le Guin)

Sorry to disappoint you, but when it comes to climate change, the human brain hasn’t evolved sufficiently to make the necessary large-scale changes to avert climate catastrophe.

Like the neighborhoods of an old city, our brains evolved in a patchwork manner, layer upon layer. In older cities, as conditions changed and the economic fortunes of some improved, the lucky ones were able to build and maintain their residences, while the less fortunate had to leave and live elsewhere. No city planning was involved. The structures that remained were built to last and were repurposed by inhabitants who adapted to societal disruptions in order to survive and thrive. Natural selection at work. Today’s gentrification of neighborhoods shows that this evolution of cityscapes is still under way.

Similarly, over a much longer period of time, the human brain evolved to adapt to changing environments and exploit niches that allowed for reproduction.

Our reptilian brain, located at the base of our skulls, is responsible for regulating vital bodily functions such as heart rate, breathing, and temperature. It also manages automatic, self-preserving behavior patterns and basic social communication.

The mammalian brain, grafted upon the reptilian brain, corresponds biologically to the limbic system. It is primarily responsible for emotional processing, social behaviors, and memory functions. It evolved after the reptilian brain and is more prominent in mammals.

Lastly, humans evolved a neocortex, which supports creative endeavors, moral reasoning, and long-term planning, providing a foundation for culture, science, and advanced social interaction. This part of the brain enables conscious thought processes that can override the more primitive instincts and emotional responses governed by the reptilian brain and limbic system.

Although it is a somewhat oversimplified model of how the human brain evolved, the triune brain functions quite well as a metaphor, pointing to the glaring challenge that humans face when trying to come to grips with the possibility that humanity’s collective actions might bring about its own demise.

In other words, as a species we know cognitively that we are screwing up, but we can’t muster the willpower to change because our reptilian brain doesn’t interpret the situation as an immediate threat to survival. This means there is no fight-or-flight response, and our limbic system cannot generate sufficient emotional energy to bring about the required behavioral changes.

Consequently, the neocortex of the Western world, particularly the prefrontal cortex, prioritizes the immediate rewards of a business-as-usual approach under circumstances it perceives as normal. Given the risk posed by catastrophic climate change, we should call this phenomenon hypernormalisation.

Sound familiar? Have we seen this before in recent history?

We have.

Alexei Yurchak, a Russian-born anthropology professor, coined the term “hypernormalisation” to describe the paradoxes of Soviet life during the 1970s and 1980s. Put simply, everyone in the Soviet Union knew the system was failing, yet no one could envision an alternative to the status quo. Both politicians and citizens were resigned to maintaining the pretense of a functioning society. Eventually, this mass delusion became a self-fulfilling prophecy. With the exception of a small group of dissidents, this became the new normal for most of the Soviet population.

For the most part, people in the former Soviet Union could live day-to-day without facing an immediate threat to their survival. In fact, openly opposing the system posed a greater threat to survival than living with impoverishment and political oppression.

However, some critics, such as filmmaker Adam Curtis, assert that the concept of hypernormalisation applies equally to the West’s decades-long slide into authoritarianism, including Donald Trump’s 2.0 reign.

Personally, I don’t think the term applies to the current situation in the United States. The US is a large, diverse, and polarized nation. Millions of Americans do not believe they are living in a functioning society. They are fighting hypernormalisation through the courts and by protesting in the streets.

I wish this were the case with regard to climate change and the risk of climate catastrophe.

Although the dynamics of climate change hypernormalisation differ greatly from those of the former Soviet Union, the end result is similar. Today, only outliers and neurodivergent people can imagine a different socioeconomic reality, one in which life on Earth is not in danger, and are prepared to act accordingly.

I would venture to say that at least 80% of people in the West are aware of the risks of climate change. However, rather than confronting this inconvenient truth, they prefer to continue living in the new normal.

They witness repeated reports of extreme weather events while maintaining the fantasy that their comfortable lifestyles can continue indefinitely, like lifelong smokers who are diagnosed with lung cancer but refuse to quit.

In my opinion, humanity’s addiction to the material pleasures derived from unabated consumption of fossil fuels and exponential growth carries a similar prognosis.

If you’re still reading or listening, then I’m sure you understood the last sentence. It may have made some of you uncomfortable, but almost without exception, your fight-or-flight response was not activated.

Therein lies the problem.

Our Paleolithic brains are mismatched to our current environment. For instance, our stress response is designed to address temporary threats, not chronic, stress-inducing situations. However, modern life often involves chronic stress, which can lead to illness and premature death.

Furthermore, our brains and bodies are not equipped to handle today’s information overload, rapid changes, and uncertain future. As a result, depression and anxiety are at record highs, particularly among younger generations. For most people, the thought of taking action against what seems like an insurmountable problem is unthinkable.

The problem is made worse by dopaminergic addictions throughout society. On the one hand, we have financial elites who can never get enough. They are fixated on extracting more natural and human resources for monetization so they can accumulate more wealth and fuel their conspicuous consumption.

The rest of society struggles to maintain its level of material comfort rather than reduce its consumption. Its members are victims of the corporate consumerism complex, which knows all too well how to manipulate our dopamine-driven reward pathways.

Sometimes, I think only neurodivergent people grasp the gravity of the situation. Take Greta Thunberg, for example. The young Swedish climate and political activist, who is neurodivergent, saw through all the excuses her elders used to justify their inaction on climate change. In her famous address at the 2019 UN Climate Action Summit, she scolded world leaders for their perceived indifference and inaction regarding the climate crisis:

How dare you! You have stolen my dreams and my childhood with your empty words. And yet I’m one of the lucky ones. People are suffering. People are dying. Entire ecosystems are collapsing. We are in the beginning of a mass extinction. And all you can talk about is money and fairytales of eternal economic growth. How dare you!

When it comes to climate change, the emperor has no clothes. It takes someone like Greta, whose mind isn’t dominated by the modern mindset, to point that out without fear of recrimination.

The rest of us are sympathetic to varying degrees, but we simply do not perceive the threat as significant or urgent enough to require immediate behavioral changes. The long-term threat is not salient. It does not register.

In fact, it’s the opposite. Typically, a prefrontal cortex embedded in Western culture cannot justify stepping outside societal norms to act for the benefit of other species and the planet, since such actions bring no immediate rewards to the individual and might actually harm one’s ability to acquire material wealth.

In the calculus of the rational maximization of self-interest, becoming a climate change activist is a bad career move.

Moreover, we have become so addicted to our pursuit of material pleasure that our minds balk at the very idea of living differently. Those who do are considered “woke,” “tree huggers,” or under the influence of the mind-altering practices of indigenous peoples.

Why rock the boat? Go with the flow. Wait for the technological fix. In other words, the function of the neurotypical prefrontal cortex in the Western world is to override any signals that, if acted upon, might disrupt the flow of dopamine through the reward pathways and the corresponding pleasures that modern life can and most often does deliver, provided you play the game by the agreed-upon rules.

Given the hegemony of the Western mindset, it seems very unlikely to me that we will escape the ontological hold that its inherent set of beliefs has on humanity. Over time, we will simply adjust the best we can to the ever-increasing disruptions to our “normal” lives that climate change will inevitably bring.

What appears to be the greatest crime against humanity and other life forms on the planet is our decision to transfer the problem of cleaning up the mess to future generations while simultaneously diminishing their ability to rise to the challenge.

We need more Greta Thunbergs in this world if we are to avert the looming collapse and massive extinctions that await.