OpenAI Is Bleeding Cash. Its Solution? Military Contracts.

In an age of algorithmically generated “kill lists,” anxieties about AI integration into military decision-making are justifiably mounting. OpenAI’s recent hiring of over a dozen former defense bureaucrats does nothing to allay these concerns.

OpenAI’s multibillion-dollar expenditures significantly outstrip its revenue. To plug its financial hole, the company is attempting to solidify its connection with one of the richest user bases possible: the US military. (Mandel Ngan / AFP via Getty Images)

Long before concerns mounted over the role of artificial intelligence in combat — including its role in civilian deaths in the Iran war — OpenAI, the maker of ChatGPT, was quietly embedding itself inside the national security state to profit from algorithmic warfare.

That included hiring a bipartisan roster of over a dozen government insiders with decades of experience in national security positions between them, plus inking a partnership with a top Trump-connected military contractor.

After years of shaping US defense policy, these insiders are now assisting the AI giant to cash in on the Trump administration’s unprecedented defense spending — no matter the ethical quandaries involved.

The personnel moves appeared to pay off last month, when one of those hires reportedly helped OpenAI secure a $200 million defense contract within hours of the White House icing out rival AI company Anthropic over that firm’s concerns about its technology being used for surveillance and automated weapons.

OpenAI’s pivot apparently began in January 2024, when the company quietly revised its usage policies, removing long-standing language that prohibited the use of its advanced AI models for “military and warfare” purposes.

At the time, OpenAI’s ChatGPT chatbot, under the guidance of CEO Sam Altman, was leading the industry with over 100 million weekly users, who were using it for a wide range of everyday personal and professional tasks, from writing and research to planning and advice.

Keen-eyed observers quickly picked up on the policy change. The firm initially claimed it was just a clarification to ensure the company’s usage policies remained readable, telling the Intercept that they had simply wanted to “create a set of universal principles that are both easy to remember and apply.” However, when pressed further, an OpenAI spokesman admitted the company harbored a desire to pursue “national security use cases.”

At the time, the firm provided few details about how it planned to integrate its products into the sprawling military-industrial complex. But its internal actions have proved illuminating: The firm embarked on a quiet hiring spree, deepening its connections with the Department of Defense by recruiting national security state insiders with close ties to those in power.

Concerns about the integration of AI into military decision-making have been mounting, thanks in part to the Gaza war, in which the Israeli military has reportedly used the technology to determine bombing coordinates and to generate a “kill list” of targets.

Like those of many other AI firms, OpenAI’s multibillion-dollar expenditures have significantly outstripped its revenue, leaving the firm hemorrhaging cash. Its 2024 warfare pivot could have been a strategic ploy to plug its financial hole with massive Department of Defense contracts, following other prominent Silicon Valley companies that have looked to the Pentagon as a business model of the future.

And to ensure OpenAI was best positioned to win potential defense contracts, the firm appeared to turn to a reliable tool of corporate influence: the revolving door.

An Unprecedented Hiring Spree

Beginning in early 2024, OpenAI started bringing in new hires of all political stripes with experience on Capitol Hill, the National Security Council, the Department of Defense, and other parts of the national security establishment.

In February 2024, just a month after lifting its prohibition on military use of OpenAI models, the firm hired Katrina Mulligan as head of national security partnerships, a role in which she “structur[es] agreements with [Department of Defense] and national security customers.” Previously, Mulligan served in the Biden administration as a staffer for several top-level Defense Department officials.

That included serving on the senior staff advising the assistant secretary of defense for Special Operations and Low-Intensity Conflict (SO/LIC). Despite being relatively unknown, SO/LIC is among the most important civilian positions in the US military, with oversight over the United States Special Operations Command.

According to military journalist Seth Harp, Mulligan’s boss at SO/LIC held “the pinnacle position in the secret military-within-the-military created after 9/11 to do assassinations and abductions all over the world.”

Harp, an expert on the Pentagon’s special operations programs, said Mulligan’s SO/LIC connections could prove helpful if OpenAI pursues contracts funded by the secretive “black budget,” the classified appropriation that pays for the military’s special operations.

“There’s lots of money to be made in that,” said Harp.

Four months after Mulligan’s hiring, in June 2024, OpenAI announced another high-profile hire: Gen. Paul Nakasone joined the company’s board of directors. Nakasone, a retired four-star general in the US Army, served as director of the National Security Agency and commander of US Cyber Command from 2018 to 2024, holding two of the most powerful positions in the US national security state. OpenAI claimed the hiring would help the company make “critical safety and security decisions.”

OpenAI’s hiring spree continued through the summer. In August 2024, the firm brought on Morgan Dwyer and Benjamin Schwartz, senior Biden officials involved in implementing the 2022 CHIPS and Science Act, which provided billions in government subsidies for the development of US computer chip manufacturing, including for defense systems.

While brought on to support data center and infrastructure development, both Dwyer and Schwartz previously served in national security roles. Among other positions, Dwyer worked as the senior-most aide to the civilian leader managing military research and technologies, including artificial intelligence.

Schwartz, meanwhile, served as a longtime adviser in the Office of the Secretary of Defense under the Obama administration, where he advised on issues relating to terrorism and South Asia policy.

That same month, OpenAI brought in Sasha Baker to serve as its head of national security policy. Baker, a former national security adviser to Sen. Elizabeth Warren (D-Mass.) and deputy chief of staff to Obama secretary of defense Ashton Carter, went on to serve as a senior official on Biden’s National Security Council and at his Department of Defense.

In the following months, OpenAI brought on two other Biden administration staffers: a former deputy spokesperson for Biden’s National Security Council and a former assistant to the assistant secretary of defense for Indo-Pacific security affairs.

While OpenAI hoovered up former Biden administration personnel, it also shored up its ranks with Republican officials.

In April 2024, the company hired Matt Rimkunas, a former deputy chief of staff and legislative director for Sen. Lindsey Graham (R-S.C.), to lead its federal affairs team and later that fall added another former Graham staffer, Meghan Dorn, to its roster. Both Dorn and Rimkunas are registered to lobby on behalf of OpenAI.

While neither had extensive executive-branch experience, their former boss’s record as the foremost hawk in the Senate (and an appropriator who chairs the Budget Committee) could have boosted their profile.

Friends in High Places

In June 2025, OpenAI announced one of its first major defense-industry initiatives. The AI giant disclosed plans to team up with Peter Thiel–backed defense technology firm Anduril to “improve the nation’s defense systems that protect U.S. and allied military personnel from attacks by unmanned drones.”

Anduril has benefited immensely from its position as one of the most deeply Trump-connected defense firms. (Among other links, the company’s founder, Palmer Luckey, is the brother-in-law of one-time Trump Attorney General nominee Matt Gaetz.) Along with generating revenue, the partnership could have provided Altman, OpenAI’s CEO, with an opportunity to ingratiate himself with the tech right’s defense-focused startup culture, which has been brewing since Trump reentered the Oval Office with Thiel and other Silicon Valley venture capitalists by his side.

OpenAI’s integration into Washington’s national security apparatus continued apace in 2025, as the company secured a $200 million Defense Department contract in June. Per the arrangement, the company would provide the Department of Defense with AI capabilities “in both warfighting and enterprise domains.”

The following month, the company hired additional national security experts.

In July 2025, OpenAI brought on a new “head of government,” Joseph Larson, a former Anduril executive and one-time deputy chief digital and artificial intelligence officer in the Office of the Secretary of Defense. For his role in “advancing the responsible adoption of artificial intelligence to bolster national security and operational efficiency across federal missions,” Larson was feted by the government contracting community, receiving a “Wash100 award” from the contracting networking company Executive Mosaic in 2026. Larson’s connections would soon prove pivotal for the AI company.

OpenAI also added former California Sen. Laphonza Butler, a Democrat and former member of the Senate Committee on Homeland Security and Governmental Affairs, to its roster of advisers last July.

That summer, the revolving door spun in the opposite direction when the Pentagon commissioned OpenAI Chief Product Officer Kevin Weil and former OpenAI Chief Research Officer Bob McGrew as officers in the Army Reserve’s “Innovation Corps” to lend their technological expertise to military brass. Neither has been required to recuse himself from future contracting discussions between the Pentagon and OpenAI.

In the months that followed, OpenAI added Connie LaRossa, a former deputy assistant secretary of legislative affairs for the Department of Homeland Security and legislative director at the Department of Defense, to its staff. LaRossa has also worked on Google’s national security team, served as a lobbyist with Cornerstone Government Affairs, and worked for a subsidiary of the defense contracting behemoth General Dynamics.

Around that same time, the AI giant also hired a former national security adviser for the Biden Department of Justice and a former communications staffer for the Under Secretary of Defense for Research and Engineering.

Right Place, Right Time

Despite the hiring spree, OpenAI’s models were still not the Defense Department’s preferred AI option at the outset of the war with Iran. Instead, the Pentagon had signed a major $200 million defense contract with Anthropic last year.

But last month, as the Trump administration soured on Anthropic over that company’s opposition to its models being used for military applications without sufficient safeguards, OpenAI capitalized on the fracas by signing a new $200 million military contract.

The deal reportedly came together under the direction of Larson, the former Secretary of Defense staffer turned OpenAI head of government, whom the Pentagon contacted when its attempt to renew its contract with Anthropic began to founder.

The use of AI in the Iran war has come under scrutiny. The Pentagon has refused to say whether artificial intelligence was used in the February 28 bombing of an elementary school in Iran that killed 175 people, many of them children.