The U.S. Department of Defense is pressing artificial intelligence companies to let the American military use their technology for “all legitimate ends,” but Anthropic is notably pushing back on the demand, according to a recent report in Axios.
The government is reportedly making the same request of OpenAI, Google, and xAI. An unnamed Trump administration official told Axios that one of those companies has agreed, while the other two are said to have shown some flexibility.
Anthropic, meanwhile, has reportedly been the most resistant. As a result, the Department of Defense is said to be threatening to cancel its $200 million contract with the AI company.
The Wall Street Journal reported in January that there was significant friction between Anthropic and Defense Department staff over how its Claude models could be used. The WSJ subsequently reported that Claude had been used in the U.S. military’s mission to capture then-Venezuelan head of state Nicolás Maduro.
Anthropic did not immediately respond to TechCrunch’s request for comment.
A company spokesperson told Axios that Anthropic has “not deliberated the deployment of Claude for particular missions with the Department of War” but is instead “prioritizing a distinct group of Usage Policy inquiries — specifically, our firm restrictions concerning completely self-governing armaments and widespread internal monitoring.”