As Israel and the U.S. wage an illegal war against Iran, the question surrounding the U.S. government’s deployment of AI on the battlefield has become how quickly and extensively the U.S. military will use it to bomb the Iranian people. On one hand, a recent standoff between Anthropic and the Pentagon over guardrails on military uses of Anthropic’s Claude model has been cited to suggest that Anthropic holds the moral high ground.[1] Indeed, because of Anthropic’s refusal to allow Claude to be used for domestic surveillance or fully autonomous weapons systems, the U.S. Department of War (DoW) retaliated by designating Anthropic a supply chain risk to national security.[2] On the other, OpenAI has swiftly stepped in and secured a coveted U.S. military AI contract, allowing its tool to be used for classified military work.[3] But it would be a mistake to see Anthropic as a moral standard-bearer for the “ethical” use of AI.

Debating which companies are more ethical obscures the fact that they all operate subject to the underlying imperatives of a capitalist political economy. Anwar Shaikh’s theory of real competition is instructive here, as competition functions as “the central regulating mechanism” of capitalism.[4] On the battlefield of real competition, firms must deploy every tactic, strategy, and new technology to survive and grow. Firms are inherently antagonistic, driven by the relentless pursuit of profit and expansion. To lag behind for too long is to perish.

Of course, government regulation may qualify or even arrest competition for more or less extended periods in specific industries. This does not alter competition’s role as “the central regulating mechanism” across the political economy.

Anthropic and OpenAI are locked in fierce real competition as they race to secure massive amounts of capital to fuel their respective businesses. The two companies have adopted different strategies: Anthropic has largely focused on B2B enterprise customers, while OpenAI has prioritized the mass consumer market. OpenAI, which has raised $110 billion,[5] is struggling to generate revenue and is projected to burn through $115 billion in cash by 2029 due to massive spending on computing power and infrastructure.[6] Meanwhile, Anthropic – which has raised a total of $64 billion – continues to lead in enterprise Large Language Model (LLM) market share and is aggressively pushing to integrate agentic AI into the workplace.[7]

Anthropic and OpenAI aren’t just competing against each other. The “magnificent seven” – Microsoft, Alphabet, Amazon, Meta, Nvidia, Apple, and Tesla – are also in this fight and are investing vast quantities of capital and building out massive infrastructures to make headway in the AI market.[8] In 2025, AI-related tech spending across the S&P 500 exceeded $1.25 trillion and the “magnificent seven” – which made up 37% of the S&P 500 valuation at their peak[9] – accounted for roughly 28% of that total expenditure.[10] Moreover, they’re all competing globally, particularly against Chinese AI companies (a topic that warrants its own discussion).

Under such intense competitive pressure, none of these companies can afford to pass up the most lucrative AI market of all: U.S. defense contracts, a market projected to reach $29.48 billion by 2035. Military spending has long served as a financial backbone for tech companies,[11] and it becomes even more critical as tech firms scramble to find sustainable AI business models.

It is no surprise that Anthropic soon came back to the negotiating table with the Pentagon.[12] Anthropic, like any other tech company – Google, Amazon, Microsoft, IBM, OpenAI, Palantir – is far from innocent. The company’s AI model Claude has already been applied to intelligence work. In 2025, Anthropic, along with Google, OpenAI, and Elon Musk’s xAI, was awarded a $200 million contract from the DoW to advance AI use in national security. In January of this year, Claude, deployed in partnership with Palantir Technologies, was used in the kidnapping of Venezuelan President Nicolás Maduro and his wife Cilia Flores.[13] Though Anthropic currently resists surveilling the American people, it appears willing to serve naked U.S. imperialism.

While we should hold these companies accountable and pressure them to resist the militarization of technology, we shouldn’t fool ourselves into thinking that any of them will sacrifice profit or business interests for some vague “moral commitment.” It is an imperative of capitalism that tech companies will ultimately do whatever it takes to compete, survive, and stay in the market – even if it means getting blood on their hands.

Shinjoung Yeo & Dan Schiller


[1] Shira Ovide, Anthropic lost the Pentagon but won over America, Washington Post, March 6, 2026.

[2] Amrith Ramkumar, Trump Administration Shuns Anthropic, Embraces OpenAI in Clash Over Guardrails, Wall Street Journal, February 27, 2026.

[3] Ibid.

[4] Anwar Shaikh, Capitalism: Competition, Conflict and Crisis (Oxford University Press, 2016), 259-283.

[5] Cade Metz and Adam Satariano, OpenAI Raises $110 Billion to Fuel Growth, Extending A.I. Boom, New York Times, February 27, 2026.

[6] OpenAI expects business to burn $115 billion through 2029, The Information reports, Reuters, September 6, 2026.

[7] Jon Markman, Anthropic: The $380 Billion Powerhouse Hiding In Plain Sight, Forbes, February 13, 2026.

[8] Tara Copp, Elizabeth Dwoskin, and Ian Duncan, Anthropic’s AI tool Claude central to U.S. campaign in Iran, amid a bitter feud, Washington Post, March 4, 2026.

[9] Lyle Daly, The Magnificent Seven makes up one-third of the S&P 500 – should investors be concerned? Yahoo Finance, October 29, 2025.

[10] Jurica Dujmovic, Investors are making a big bet on Big Tech’s AI spending. They’re about to learn if it paid off, Morning Star, January 28, 2026.

[11] Yasha Levine, Surveillance Valley: The Secret Military History of the Internet (Icon Books, 2018); John Bellamy Foster and Robert W. McChesney, Surveillance Capitalism: Monopoly-Finance Capital, the Military-Industrial Complex, and the Digital Age, Monthly Review 66, no. 3, July-August 2014.

[12] George Hammond and Cristina Criddle, Anthropic chief back in talks with Pentagon about AI deal, Financial Times, March 4, 2026.

[13] Nick Robins-Early, What does the US military’s feud with Anthropic mean for AI used in war? Guardian, March 7, 2026; Ian Duncan and Tara Copp, After a deadly raid, an AI power struggle erupts at the Pentagon, Washington Post, February 22, 2026; Nick Robins-Early, Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks, Guardian, February 26, 2026.
