
Earlier this year, a prominent company with millions of customers announced a major product upgrade—albeit with one little catch.
If this new product were released to the public, the company said, it could be used to disrupt—and perhaps destroy—civilizational infrastructure, from financial markets to transportation systems to power and water utilities.
But fear not! The company hastened to reassure the public that it had the situation under control. The company would decide, on its own terms, what the world needed to know, who should be called in to contain the problem, and how much gratitude the rest of us should feel for being spared a catastrophe we never knew was coming. No public accountability or government intervention required.
This, of course, is the story of Anthropic and its latest AI model.
Anthropic discovered that the model, known as Mythos, could autonomously identify zero-day vulnerabilities—that is, security flaws that software makers don’t know exist—across every major operating system and web browser. Some of the flaws Mythos found were decades old, overlooked and unnoticed by literally millions of human eyes. This was not an intended feature, but one that the AI seems to have picked up along the way, as Anthropic’s developers rushed to create a more powerful model with better reasoning and coding abilities.
Intentional or not, this capability introduced a substantial new danger to the world. In the wrong hands, Mythos could be a weapon fit for a supervillain—a cheat code for attacking the world’s most critical infrastructure.
And yet, the decision to build such an advanced model was Anthropic’s alone. No independent body evaluated it. No regulator was notified in advance.
And once the threat was identified, Anthropic decided—alone—what to do about it. After judging Mythos too dangerous for public release, Anthropic created a private consortium made up of handpicked partners like Amazon, Apple, Cisco, JPMorgan Chase, and Nvidia to fix the bugs and ensure Mythos’ safety.
With that all worked out, the company gave policymakers and the public a heads-up on its dangerous new product and the plan to contain it.
This is what passes for AI governance in 2026: a single company accidentally builds an entity powerful enough to pose an existential threat to the digital systems that power modern life, unilaterally decides how to deal with it, and then loops in everyone else.
Except, of course, it’s not at all clear that they’re dealing with it: A few weeks after all this transpired, we learned that Mythos had, in fact, been accessed by unauthorized users. Was catastrophe avoided, or merely delayed? We may yet find out.
Mythos is the clearest evidence yet that our system for developing, assessing, and disseminating powerful AI systems is dangerously dysfunctional.
As tempting as it is to blame this dysfunction on bad actors or rogue tech CEOs, I think it’s something deeper than that: a broken incentive structure. As careless as their actions may sometimes seem, AI developers aren’t being intentionally malevolent—they’re rationally operating within a system that rewards chasing progress now and worrying about consequences later.
The leading AI companies, armed with billions in capital, are all sprinting down the same track with an imperative to cross the finish line first. They all have the same motivation: “If I don’t build it, someone else will.”
That logic coexists with a genuine belief that AI may prove to be a transformative force for good, generating productivity in unimagined new ways and pointing the way forward for progress. AI’s potential benefits have been exhaustively documented—whether in addressing climate change, enhancing medicine, or simply widening our horizons. But at this stage in the AI era, we all have to acknowledge that AI is accompanied by myriad harms, from job loss to manipulative engagement to cognitive offloading to AI psychosis to AI-assisted suicide and murder.
The scale of these numerous challenges demands a response as wide and deep as our society. One self-interested company or a hand-picked corporate consortium can’t be trusted to get it right—the issue is far larger than that. The solution, should we get there, will require public understanding and engagement, and government oversight.
To those who claim AI is too complex, too consequential, or too powerful to govern: you’re wrong. In reality, this argument is—at best—a shoddy defense of the broken incentive structure that produced the problem in the first place.
Because AI is complex, we have a responsibility to comprehend it. And because AI is so consequential, we have a responsibility to govern it. Institutions, policymakers, and regulators have been understandably disoriented by the AI frenzy of the last few years, but now must rise above the noise and rewrite misaligned incentives. That means—yes—establishing a role for government in the AI sphere. Concerns about governmental efficacy are understandable, but government must be meaningfully engaged. There simply is no other manifestation of the will of the public.
We have governed consequential technologies before: automobiles, aviation, pharmaceuticals, nuclear energy, and more. Every one of these industries today operates inside a hard-won system of accountability—a system that took time to build but, crucially, did not kill innovation. It’s time to apply the same rules and accountability structures to AI, and with even more urgency, considering how quickly it is integrating into virtually every aspect of our society.
And the fact is, no meaningful federal regulation of AI currently exists. States have stepped up to fill the void, with 73 AI laws—ranging from protecting kids online to ensuring a human is in the loop when it comes to critical decisions like healthcare—enacted across 27 states in 2025. But states’ reach is increasingly limited, with Trump issuing an executive order in December directed against “excessive state regulation.” The tech industry, meanwhile, has worked to paralyze regulation at every turn, with AI companies pouring money into Super PACs to support tech-friendly candidates and block state regulatory laws.
So what could a meaningful regulatory structure actually look like, assuming the political will for it materialized? Let’s take Mythos as a test case.
Under a more rational governance framework, a tool with society-altering capabilities like Mythos would face mandatory pre-deployment testing by independent evaluators—not by the company selling the product.
There would be standardized public reporting of risks, so that regulators, businesses, and users could make informed decisions rather than relying on what the developer chooses to disclose. There would be real whistleblower protections for employees inside AI labs who see something wrong and want to say so.
And if an AI product caused foreseeable harm after its release, the company that built and deployed it would bear legal responsibility. Liability is what aligns private incentives with public safety. It’s why cars have seatbelts and airbags—not because manufacturers wanted them, but because they knew they would pay the price for cutting corners and because insurers and legislators aggressively pushed safety measures. The same logic applies here.
These two principles—safety and transparency before deployment, and a genuine duty of care to the public—are key to a framework that orients policymakers, companies, and citizens toward what responsible AI actually requires.
None of this is radical. It’s all standard practice for products in other industries. And all of it is overdue.
Mythos is just the latest and most egregious evidence that we cannot keep relying on the judgment of individual companies to stand in for the public accountability structures we’ve so far refused to build around AI. The next threat may not be discovered in time. Or it might come from a company more desperate to succeed in an incentive structure that rewards reckless behavior.
We’ve done this before. We have the tools. It’s time we reclaim our future with principles that will protect us, individually and collectively.
Posted by AmericanPurposeMag
Submission Statement
AI companies are building machines that threaten to destroy humanity, but at the same time they are acting baffled that regulators want to restrict them.