The Federal AI Moat: Anthropic Is Selling Regulatory Capture as "Bipartisan Wisdom"
Know the op when you see it.
In a New York Times op-ed published Monday, two former White House officials — Dean Ball, who had a short stint at the Trump White House and The Heritage Foundation, and Ben Buchanan, President Biden’s White House adviser for AI — joined hands “across the aisle” to deliver a supposed warning: artificial intelligence has become so dangerous that Congress must act now.
Their headline policy ask, parked unobtrusively near the end of the piece, is the giveaway. “Congress should mandate audits of A.I. developers’ safety claims and processes,” they write, “requiring that they be conducted by independent expert bodies overseen by the government.”
That single sentence is the whole game.
Halfway through the column, a parenthetical notes that “Dr. Buchanan is an outside adviser to Anthropic,” which all but reveals this “op-ed” as a corporate memo from Anthropic itself, given that Ball too has deployed rhetoric indistinguishable from the company’s policy posture (and works for the Foundation for American Innovation, which has become Anthropic’s chief defender in its battle with the Pentagon). The article’s central piece of evidence (the case study the authors use to argue that AI now threatens the homeland) is a product announcement from Anthropic itself. Claude Mythos, we are told, found “thousands of critical vulnerabilities” in foundational software and “in the wrong hands” could be turned against power grids, hospitals, and banks. So one of the authors is paid by the company whose marketing material he cites as a national-security threat assessment, all to justify federal regulation that, conveniently, his client is already structured to comply with.
You don’t have to be cynical to notice that The New York Times published what amounts to paid advertising in op-ed form.
Ball and Buchanan would have you believe their proposed audit regime is a modest, common-sense guardrail. It is not. Mandatory third-party safety audits of AI developers, conducted by government-blessed “expert” bodies of political animals, are a textbook regulatory moat. They impose fixed compliance costs that scale beautifully for incumbents and crush everyone else. Anthropic, OpenAI, and Google can absorb a compliance bureaucracy without breaking a sweat. A team of seven engineers in Miami or Austin building a startup competitor cannot.
I’ve spent enough time in D.C. to know that this is how it always works. In finance, in tech, in pharmaceuticals, in energy, the firms that lobby loudest for “responsible regulation” are the ones already sitting on top of the hill. Once you are the incumbent, the most valuable thing the government can sell you is a fence. This op-ed is a polite request for that fence, and the “national security” rhetoric is an attempt to scare the holdouts into submission.
And who, precisely, would staff these “independent expert bodies overseen by the government”? AI is still a small field. The pool of credentialed and somehow unaffiliated experts capable of evaluating frontier model safety lives almost entirely inside the same handful of companies the audits would govern, or in the academic and think-tank ecosystem those organizations underwrite. The revolving door would spin at full speed here, much as pharmaceutical regulatory bodies are routinely staffed with individuals placed there to protect specific corporate interests. Anthropic in particular has spent years branding itself as the “safety-first” AI company, marketing its researchers as the responsible adults in the room. A federal audit regime would formalize that positioning into law. It would take the company’s existing internal practices, declare them the federal standard, and force every competitor to either adopt them or exit the American market.
Notice also the asymmetry baked into the piece. Bioweapon risk and cyber catastrophe are described in vivid, urgent prose: power grids going down, hospitals breached, virologists outclassed by new LLMs. The threats that justify expanding the federal role, though they remain entirely hypothetical, are presented as concrete and imminent. The costs of the proposed regime, meanwhile, the compliance burden and the barrier it raises against smaller competitors, receive no such vivid treatment, or any treatment at all.
The “bipartisan” framing is the ultimate tell in Washington. Ball and Buchanan note, as if it were profound, that they served different presidents and still agree. Yet both have an overwhelming personal and professional interest in expanded federal involvement, because they ultimately answer to the same stakeholders. When two former White House staffers from opposing administrations converge on a policy, that is not an automatic sign the policy is correct. Far from it.
Innovation does not happen because Congress mandates audits. It happens because builders are free to build, fail, iterate, and try again, without first having to be approved by a government-credentialed compliance body funded, staffed, and culturally captured by the firms it is supposedly overseeing. The audit regime these authors recommend is a moat, dressed in safety language, written by men with a financial and professional interest in seeing the moat built.
Anthropic wants its AI moat, and I don’t blame them for playing the D.C. game of making their case through an endless series of Matryoshka-doll cutouts. This is a company that could well IPO at a valuation north of a trillion dollars. But when it comes to what is probably the most important category of technological innovation of this decade and the coming ones, competition and an open playing field in AI will serve the American people far better than a moat.

There are few things more astonishing to me than the fact that the public, which can avail itself of all of history, including recent history, via the devices they all carry every day, still nods thoughtfully when someone stands up and says “what we need is more government involvement!” People who say that today should be reflexively, overwhelmingly, and passionately tarred and feathered within seconds of opening their mouths, by a public that has spent the last 20 years watching a huge, bloated government systematically destroy the tech industry (once the purest of meritocracies, utterly color- and sex-blind), healthcare, transportation; the list is endless.
Furthermore, even the “experts” at the AI companies, the ones actually creating these tools, are prone to falling head-first into their own toolset and declaring it alive, conscious, and so on. In other words, even the “experts” don’t know what they’re talking about, something else the public should have noticed as a general rule after the whole COVID debacle.
You don’t have to be a cynic, but healthy skepticism is an asset. I’ve got a lot of respect for Dean Ball, but . . . follow the money.