Jack Clark, one of Anthropic’s co-founders who also serves as Head of Public Benefit for Anthropic PBC, confirmed that the AI company had briefed the Trump administration about its new Mythos model.
The model, announced last week, is considered so dangerous that it is not being released to the public, largely because of its allegedly powerful cybersecurity capabilities.
In an interview at the Semafor World Economy Summit this week, Clark explained why the company remains engaged with the U.S. government even while suing it.
This March, Anthropic filed a lawsuit against Trump’s Department of Defense (DOD) after the agency labeled the company a supply-chain risk. Anthropic had clashed with the Pentagon over whether the military should have unrestricted access to Anthropic’s AI systems for use cases that included mass surveillance of Americans and fully autonomous weapons. (OpenAI ended up winning the deal instead.)
At the conference, Clark downplayed the administration’s labeling of the company as a supply-chain risk, calling it merely a “narrow contracting dispute” that Anthropic didn’t want to overshadow the company’s genuine concern for national security.
“Our position is the government has to know about this stuff, and we have to find new ways for the government to partner with a private sector that is making things that are truly revolutionizing the economy, but are going to have aspects to them which hit national security equities and other ones,” said Clark. “So absolutely, we talked to them about Mythos, and we’ll talk to them about the next models as well.”
His confirmation comes after reports last week that Trump officials were encouraging banks to test Mythos, including JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley.
Clark also addressed other aspects of AI’s impact on society during the interview, including things like unemployment and higher education.
Anthropic CEO Dario Amodei has previously warned that AI’s advances could push unemployment to Depression-era levels, but Clark takes a slightly different view. He explained in the interview that Amodei believes AI will get much more powerful, much faster than most people expect, and bases his estimates on that premise.
Clark, who leads a team of economists at Anthropic, said that the company is so far only seeing “some potential weakness in early graduate employment” across select industries. He noted that Anthropic is ready in case there are major employment shifts, however.
Pushed to say which majors college students should pursue or avoid in light of AI’s impact, Clark would only broadly suggest that the most important majors are those that “involve synthesis across a whole variety of subjects and analytical thinking about that.”
“That’s because what AI allows us to do is it allows you to have access to sort of an arbitrary amount of subject matter experts in different domains,” Clark said. “But the really important thing is knowing the right questions to ask and having intuitions about what would be interesting if you collided different insights from many different disciplines.”
Key Takeaways:
- Paradoxical Partnership: Anthropic, an AI safety leader, is simultaneously briefing the U.S. government on its most powerful and dangerous unreleased AI model, Mythos, while actively suing the Department of Defense over contracting disputes, showcasing a complex, dual-track engagement with national security.
- Mythos’s Dual-Use Dilemma: The Mythos model, heralded for its potent cybersecurity capabilities, is deemed too risky for public release, highlighting the escalating challenges surrounding advanced AI’s dual-use potential for both significant benefit and grave misuse.
- Navigating AI’s Societal Impact: Anthropic co-founder Jack Clark offers a nuanced perspective on AI’s economic effects, tempering dire unemployment predictions with current data, and emphasizing the critical role of interdisciplinary synthesis and analytical thinking in future-proofing education.
Anthropic’s Tightrope Walk: Briefing the Government on ‘Dangerous’ AI Amidst Lawsuit
In a revealing interview at the Semafor World Economy Summit, Jack Clark, a co-founder and the Head of Public Benefit for Anthropic PBC, confirmed what many in the tech and policy spheres had suspected: Anthropic has briefed the Trump administration on its new, highly advanced—and highly dangerous—Mythos AI model. This disclosure comes even as the frontier AI company is embroiled in a legal battle with the U.S. government, painting a vivid picture of the intricate and often contradictory relationship between pioneering AI developers and national authorities.
Mythos Unveiled (and Withheld): A Glimpse into Powerful, Perilous AI
Last week, the tech world buzzed with news of Anthropic’s Mythos model, a system so potent and potentially hazardous that the company has opted to keep it under wraps, away from public release. Clark elaborated that Mythos’s formidable cybersecurity capabilities are a primary driver behind this decision, underscoring the delicate balance between innovation and risk mitigation in the AI landscape. The revelation that government officials were already encouraging major financial institutions—including titans like JPMorgan Chase, Goldman Sachs, and Citigroup—to test Mythos further signals the model’s perceived significance and the immediate implications of its power.
The very nature of Mythos, described as “dangerous” due to its advanced cybersecurity prowess, brings into sharp focus the “dual-use” dilemma inherent in cutting-edge AI. Tools designed to bolster defenses can, in the wrong hands, become potent offensive weapons, capable of unprecedented disruption. Anthropic’s decision to brief the government, despite withholding public access, illustrates a commitment to responsible disclosure, acknowledging that the implications of such technology extend far beyond commercial applications and directly impact national security frameworks.
The Paradox of Engagement: Suing the DOD While Safeguarding National Security
Perhaps the most intriguing aspect of Clark’s interview was his explanation for Anthropic’s continued engagement with the U.S. government, even as the company actively pursues legal action against it. In March, Anthropic filed a lawsuit against the Department of Defense (DOD) after the agency branded the company a “supply-chain risk.” This label stemmed from a fundamental clash over the military’s desired unrestricted access to Anthropic’s AI systems, particularly for controversial applications such as mass surveillance of American citizens and the development of fully autonomous weapons. Notably, OpenAI ultimately secured the contract that Anthropic had contested.
Clark, however, sought to downplay the lawsuit, characterizing it as a “narrow contracting dispute.” He stressed that this disagreement should not overshadow Anthropic’s overarching commitment to national security. “Our position is the government has to know about this stuff, and we have to find new ways for the government to partner with a private sector that is making things that are truly revolutionizing the economy, but are going to have aspects to them which hit national security equities and other ones,” Clark asserted. This statement encapsulates the tightrope many leading AI companies walk: balancing corporate interests and ethical boundaries with the imperative to inform and collaborate with government entities on technologies with profound geopolitical ramifications.
The lawsuit itself underscores a broader tension in the AI ecosystem: who controls these powerful tools, and for what purposes? Anthropic’s stance suggests a push for guardrails and responsible deployment, even when it means challenging potential government overreach. Their ongoing briefings, despite legal friction, demonstrate a recognition that critical dialogue is essential for managing the risks and maximizing the benefits of AI at a national level.
AI’s Societal Footprint: Rethinking Employment and Education
Beyond the geopolitical chess game, Clark also delved into AI’s far-reaching societal impacts, particularly concerning employment and higher education. These discussions reflect a growing urgency within the AI community to proactively address the transformative effects of their creations.
The conversation inevitably touched upon the stark warnings issued by Anthropic CEO Dario Amodei, who previously cautioned that AI’s rapid advancements could trigger unemployment rates reminiscent of the Great Depression. Clark, leading a team of economists at Anthropic, offered a more tempered view. He attributed Amodei’s more dire predictions to a belief in AI’s exceptionally rapid and profound power growth. For now, Clark noted, Anthropic’s data indicates only “some potential weakness in early graduate employment” in specific sectors, acknowledging, however, that the company is actively preparing for potentially significant future employment shifts.
When pressed on the best academic paths for college students in an AI-dominated future, Clark refrained from prescribing specific majors. Instead, he championed an approach centered on foundational skills: majors that “involve synthesis across a whole variety of subjects and analytical thinking about that.” His reasoning highlights AI’s role as an unparalleled information access tool. “What AI allows us to do is it allows you to have access to sort of an arbitrary amount of subject matter experts in different domains,” Clark explained. Therefore, the critical human skill becomes “knowing the right questions to ask and having intuitions about what would be interesting if you collided different insights from many different disciplines.” This vision emphasizes critical thinking, creativity, and interdisciplinary problem-solving as the enduring pillars of human value in an increasingly automated world.
Bottom Line
Anthropic’s current posture—simultaneously informing the government about its most advanced, unreleased AI while suing a key defense agency—epitomizes the intricate challenges at the frontier of artificial intelligence. It highlights the profound tension between rapid technological innovation, national security imperatives, and the ethical responsibilities of AI developers. The Mythos model, too dangerous for public release yet deemed essential for government review and even bank testing, underscores the immediate need for robust governance frameworks and transparent dialogue. As AI continues its exponential ascent, Anthropic’s actions serve as a bellwether for the ongoing struggle to define the boundaries of private power, public oversight, and the ultimate societal impact of humanity’s most transformative technology.