The Pentagon, Anthropic, and The Battle For AI Control
Effective Altruists think they should set the rules for how governments use AI.
The Effective Altruism movement’s core premise is that you can calculate the objectively correct moral position through reason and evidence. That’s fine as a philosophical framework. The problem comes when people holding that framework think it gives them authority to override constitutional processes or impose their “calculations” on others. The Effective Altruist movement dominates the AI space, and the space is inundated with executives who believe they have superior moral reasoning and should therefore have outsized influence over civilizational decisions. That’s exactly what’s happening with Anthropic in its campaign to impose its moral framework on the government, and in the Trump Administration’s pushback against that initiative.
After weeks of back and forth between the major AI company and the War Department over the use of its technology for certain applications, President Trump unleashed on Anthropic Friday afternoon:
We shall see if President Trump will follow through on that order directing all Federal agencies to stop using Anthropic's technology. President Trump, at his core, is a dealmaker. So this is likely the fiery opening salvo in a back and forth with Anthropic.
This comes on the heels of Secretary of War Pete Hegseth giving Anthropic an ultimatum: drop the AI restrictions for military applications by Friday, or lose your $200 million Pentagon contract and face designation as a supply chain risk under the Defense Production Act (DPA). The Trump Administration has also threatened to invoke the DPA to force Anthropic's compliance, though this may be more negotiating leverage than serious policy.
Anthropic CEO Dario Amodei, a man deeply connected to the Effective Altruist network, the same executive who left OpenAI over “safety concerns,” is now drawing red lines. The company insists it won’t budge on two issues: AI-controlled autonomous weapons systems and mass domestic surveillance of American citizens. They say that AI isn’t currently reliable enough to operate weapons autonomously, and there are no laws governing how AI could be used in mass surveillance.
Noble stance, right? The principled AI company standing up to the “military-industrial complex.”
Except there’s a problem with this narrative.
Just before this Pentagon showdown, Anthropic announced that, citing commercial competition, it was loosening its core safety framework, the Responsible Scaling Policy it had championed for two years. The company admitted that maintaining strict “safety” standards would “hinder its ability to compete in a rapidly growing AI market.”
So when there’s money on the line from enterprise customers, safety standards are, in fact, negotiable. But when it comes to helping the American military maintain technological superiority over China, suddenly Anthropic discovers uncompromising principles.
This gets even more interesting when you examine who Anthropic is willing to take money from. The company just raised $30 billion, with investments from the Qatar Investment Authority and other Gulf state entities. These aren’t exactly bastions of human rights. The ruling family of Qatar is all too often facilitating the financing of terrorism, and it operates under Sharia law. But apparently, taking billions from authoritarian regimes raises no ethical red flags for the AI safety crowd.
These are the same people who claim “AI safety” requires them to censor your questions about controversial social issues, but taking money from Sharia law states raises no red flags.
Anthropic isn’t opposed to AI being used for morally questionable purposes. They’re opposed to AI being used for American military purposes.
The concern about “mass domestic surveillance” is more civil libertarian virtue signal than legitimate grievance. Yes, the War Department operates within a system of laws and constitutional checks and balances. Is that system perfect? Of course not. But the Pentagon is not asking Anthropic to permit use of its technology for unlawful purposes. What Anthropic is really trying to do here is act as an additional governance layer on top of the government itself. Now you can see why someone like President Trump or Secretary of War Hegseth would interpret this as a challenge to their constitutional authority, because that’s exactly what it is.
As for AI-controlled autonomous weapons being “unreliable,” that’s a technical claim but also an attempt to carve out a moral framework for the future. Anthropic’s leadership roster is again trying to impose an additional standard for use of its products, one that does not apply to any other company that does business with the government.
Hegseth’s ultimatum, and now Trump’s, makes this very clear. The Pentagon isn’t asking Anthropic to build Skynet. They’re asking for access to AI tools that will help protect American service members and maintain our advantage over a Chinese adversary that is neck and neck with us in the AI race.
Anthropic’s response reveals their actual priorities. When faced with a choice between potentially helping the American military and maintaining their status as the “ethical” AI company in the eyes of the tech world, they chose their reputation among tech elites. This is why Sam Altman, CEO of OpenAI, publicly aligned himself with Anthropic's position. He is signaling that his colleagues and acquaintances in the AI sector see themselves as morally superior to the Pentagon's authority, and that stepping into the void would not be helpful to OpenAI’s reputation. More than anything, this ties back to the reality that Big AI is inundated with Effective Altruists who believe that they are the most enlightened beings on the planet and should eventually become our masters of the universe.
This isn’t about safety or “responsible AI.” It’s a power play aimed at developing a moral and legal ruleset for the government and corporations that want to use Anthropic’s technology.
Effective Altruists believe they should set the rules for how governments use AI. They know full well that the Pentagon operates under constitutional constraints — unlike the Chinese military — and that judicial processes exist to challenge violations of Americans’ rights. This isn’t about protecting civil liberties. It’s a play to become the overlords of morality, positioning themselves as an authority layer above the elected government. The Trump Administration cannot allow unelected tech executives to claim that kind of power.