How a Techno-Optimist Became a Grave Skeptic

Before Covid, I would have described myself as a technological optimist. New technologies almost always arrive amid exaggerated fears. Railways were supposed to cause mental breakdowns, bicycles were thought to make women infertile or insane, and early electricity was blamed for everything from moral decay to physical collapse. Over time, these anxieties faded, societies adapted, and living standards rose. The pattern was familiar enough that artificial intelligence seemed likely to follow it: disruptive, sometimes misused, but ultimately manageable.

The Covid years unsettled that confidence—not because technology failed, but because institutions did.

Across much of the world, governments and expert bodies responded to uncertainty with unprecedented social and biomedical interventions, justified by worst-case models and enforced with remarkable certainty. Competing hypotheses were marginalized rather than debated. Emergency measures hardened into long-term policy. When evidence shifted, admissions of error were rare, and accountability rarer still. The experience exposed a deeper problem than any single policy mistake: modern institutions appear poorly equipped to manage uncertainty without overreach.

That lesson now weighs heavily on debates over artificial intelligence.

The AI Risk Divide

Broadly speaking, concern about advanced AI falls into two camps. One group—associated with thinkers like Eliezer Yudkowsky and Nate Soares—argues that sufficiently advanced AI is catastrophically dangerous by default. In their deliberately stark formulation, If Anyone Builds It, Everyone Dies, the problem is not bad intentions but incentives: competition ensures someone will cut corners, and once a system escapes meaningful control, intentions no longer matter.

A second camp, including figures such as Stuart Russell, Nick Bostrom, and Max Tegmark, also takes AI risk seriously but is more optimistic that alignment, careful governance, and gradual deployment can keep systems under human control.

Despite their differences, both camps converge on one conclusion: unconstrained AI development is dangerous, and some form of oversight, coordination, or restraint is necessary. Where they diverge is on feasibility and urgency. What is rarely examined, however, is whether the institutions expected to provide that restraint are themselves fit for the role.

Covid suggests reason for doubt.

Covid was not merely a public-health crisis; it was a live experiment in expert-driven governance under uncertainty. Faced with incomplete data, authorities repeatedly chose maximal interventions justified by speculative harms. Dissent was often treated as a moral failing rather than a scientific necessity. Policies were defended not through transparent cost-benefit analysis but through appeals to authority and fear of hypothetical futures.

This pattern matters because it reveals how modern institutions behave when stakes are framed as existential. Incentives shift toward decisiveness, narrative control, and moral certainty. Error correction becomes reputationally costly. Precaution stops being a tool and becomes a doctrine.

The lesson is not that experts are uniquely flawed. It is that institutions reward overconfidence far more reliably than humility, especially when politics, funding, and public fear align. Once extraordinary powers are claimed in the name of safety, they are rarely surrendered willingly.

These are precisely the dynamics now visible in discussions of AI oversight.

The “What If” Machine

A recurring justification for expansive state intervention is the hypothetical bad actor: What if a terrorist builds this? What if a rogue state does that? From that premise flows the argument that governments must act pre-emptively, at scale, and often in secrecy, to prevent catastrophe.

During Covid, similar logic justified sweeping biomedical research agendas, emergency authorizations, and social controls. The reasoning was circular: because something dangerous might happen, the state must take extraordinary action now—action that itself carried significant, poorly understood risks.

AI governance is increasingly framed in the same way. The danger is not only that AI systems might behave unpredictably, but that fear of that possibility will legitimize permanent emergency governance—centralized control over computation, research, and information flows—on the grounds that there is no alternative.

Private Risk, Public Risk

One underappreciated distinction in these debates is between risks generated by private actors and risks generated by state authority. Private firms are constrained—imperfectly, but meaningfully—by liability, competition, reputation, and market discipline. These constraints do not eliminate harm, but they create feedback loops.

Governments operate differently. When states act in the name of catastrophe prevention, feedback weakens. Failures can be reclassified as necessities. Costs can be externalized. Secrecy can be justified by security. Hypothetical future harms become policy levers in the present.

Several AI thinkers implicitly acknowledge this. Bostrom has warned about “lock-in” effects—not just from AI systems, but from governance structures created during moments of panic. Anthony Aguirre’s call for global restraint, while logically coherent, relies on international coordination bodies whose recent track record on humility and error correction is poor. Even more moderate proposals assume regulators capable of resisting politicization and mission creep.

Covid gives us little reason to be confident in that assumption.

The Oversight Paradox

This leads to a troubling paradox at the heart of the AI debate. If one genuinely believes advanced AI must be constrained, slowed, or halted, it is governments and transnational institutions that are most likely to hold the power to do so. Yet these are precisely the actors whose recent behavior gives the least confidence in restrained, reversible use of that power.

Emergency framing is sticky. Authority acquired to manage hypothetical risks tends to persist and expand. Institutions rarely downgrade their own importance. In the AI context, this raises the possibility that the response to AI risk entrenches brittle, politicized systems of control that are harder to unwind than any individual technology.

The danger, in other words, is not only that AI escapes human control, but that fear of AI accelerates the concentration of authority in institutions already shown to be slow to admit error and hostile to dissent.

Rethinking the Real Risk

This is not an argument for complacency about AI, nor a denial that powerful technologies can do real harm. It is an argument for broadening the frame. Institutional failure is itself an existential variable. A system that assumes benevolent, self-correcting governance is no safer than one that assumes benevolent, aligned superintelligence.

Before Covid, it was reasonable to attribute most technological pessimism to human negativity bias—the tendency to believe that our generation’s challenges are uniquely unmanageable. After Covid, skepticism looks less like bias and more like experience.

The central question in the AI debate is therefore not just whether machines can be aligned with human values, but whether modern institutions can be trusted to manage uncertainty without amplifying it. If that trust has eroded—and Covid suggests it has—then calls for expansive AI oversight deserve at least as much scrutiny as claims of technological inevitability.

The greatest risk may not be that AI becomes too powerful, but that fear of that possibility justifies forms of control we later discover are far harder to live with—or escape.


Author

  • Roger Bate

Roger Bate is a Brownstone Fellow, Senior Fellow at the International Center for Law and Economics (January 2023–present), Board Member of Africa Fighting Malaria (September 2000–present), and Fellow at the Institute of Economic Affairs (January 2000–present).
