AI Safety Standoff: Has Anthropic Silently Pulled the Plug on Biden's Vision?

Anthropic is the latest company to quietly remove AI-safety pledges made under a Biden-era partnership from its website.

Mar 6, 2025
Ethics

The AI field is in constant flux, and Anthropic, a prominent AI company, appears to be distancing itself from its recent past. The company has reportedly removed from its official website several voluntary commitments it made jointly with the Biden administration in 2023. Those commitments, once considered strong evidence of Anthropic's embrace of AI safety and "trustworthy" AI, have seemingly vanished.

The Changing Landscape 

The Midas Project, an AI oversight organization, was the first to notice the change. It observed that commitments previously displayed in Anthropic's transparency center, covering the sharing of AI risk-management information with government and industry and research on AI bias and discrimination, disappeared last week. Only commitments related to reducing AI-generated sexually abusive imagery remain.

Anthropic's move has been remarkably low-key: no change was publicly announced, and the company has remained silent in the face of media inquiries, fueling speculation about its motives.

Earlier Agreements 

Recall that in July 2023, Anthropic and other tech giants, including OpenAI, Google, Microsoft, Meta, and Inflection, publicly committed to a series of AI safety pledges in response to the Biden administration's call. The pledges included rigorous internal and external safety testing before releasing AI systems, significant investment in cybersecurity to protect sensitive AI data, and guidelines for developing watermarks for AI-generated content.

These commitments were not legally binding, and Anthropic had already adopted many of the practices. Within the Biden administration's strategic framework, however, the voluntary agreement carried deeper political significance: it served as an early signal of the administration's AI policy priorities ahead of the more comprehensive AI executive order that followed later that year.

Indeed, times have changed, and so has the United States president. The Trump administration has made clear that it takes a markedly different approach to AI governance than its predecessor. Early this year, it rescinded the Biden administration's AI executive order, which had directed the National Institute of Standards and Technology to develop industry guidelines helping companies identify and correct flaws in their models, including bias.

Critics close to the Trump administration argued that the Biden order's burdensome reporting requirements forced companies to disclose trade secrets, which they found unacceptable. In its place, Trump signed a new executive order instructing federal agencies to promote AI development "unburdened by ideological biases," aiming to foster "human flourishing, economic competitiveness, and national security." Notably, the new order makes no mention of combating AI discrimination, a central tenet of the Biden administration's initiative.

Wider Effects 

As the Midas Project noted on social media, the Biden-era AI safety commitments carried no expiration date, nor were they contingent on which party held the presidency. Even after the November election, several of the companies that signed them publicly affirmed their continued adherence.

In the few months since the Trump administration took office, however, Anthropic has become one of several tech companies adjusting their public policy stances. OpenAI recently announced an embrace of "intellectual freedom," emphasizing openness on even sensitive or controversial topics while pledging that its AI systems will not unduly censor particular viewpoints.

Like Anthropic, OpenAI also quietly removed a page from its website dedicated to its commitments to diversity, equity, and inclusion (DEI). These DEI initiatives had previously faced fierce criticism from the Trump administration, leading many companies to halt or significantly adjust their DEI programs.

Meanwhile, some of Trump's AI advisors, including Marc Andreessen, David Sacks, and Elon Musk, have publicly accused tech giants like Google and OpenAI of implementing "AI censorship" by artificially limiting the responses of their AI chatbots. However, multiple labs, including OpenAI, have denied that their policy adjustments are directly influenced by political pressure.

Addressing the Concerns 

As the controversy continued, Anthropic finally broke its silence, releasing a statement to address the concerns:

"We remain firmly committed to the voluntary AI commitments established by the Biden administration. The progress and specific actions of these commitments continue to be reflected in the content of [our] transparency center. To avoid further confusion, we will add a dedicated section directly referencing our progress."

It remains to be seen whether this belated statement fully alleviates the concerns. Whether Anthropic's apparent "withdrawal of commitments" was a misunderstanding or a calculated strategic shift is still an open question. Time will tell.