This story feels like a plotline from an HBO satire, a real-life episode of Silicon Valley. Just this week, a nasty piece of malware was discovered in an open-source project from Y Combinator alum LiteLLM.
LiteLLM gives developers easy access to dozens of AI models and offers features like spend tracking. It has been a runaway success, downloaded as often as 3.4 million times a day, according to Snyk, one of the many security researchers tracking the incident. The project had earned 40,000 stars on GitHub and thousands of forks (copies that users adapted and customized for their own purposes).
The malware was discovered, documented, and publicly disclosed by research scientist Callum McMahon of FutureSearch, a company that offers AI agents for web research. The malware got in through a "dependency," meaning other open-source software that LiteLLM relied on. It then stole the login credentials of every system it touched. Using those credentials, the malware gained access to other open-source packages and accounts to harvest more credentials, and so on.
The malware crashed McMahon's computer after he downloaded LiteLLM, which is what led him to investigate and uncover it. Ironically, it was a bug in the malware itself that crashed his machine. Given how sloppily this particular piece of malicious code was written, he (along with famed AI researcher Andrej Karpathy) concluded it must have been built with little forethought, or "vibe coded."
LiteLLM's creators have been working around the clock this week to fix the problem, and the good news is that it was caught fairly quickly, likely within hours.
There's another twist to this ongoing story that users on X can't stop talking about. As of March 25, when we checked, LiteLLM still prominently advertises on its website that it has passed two major security compliance certifications: SOC 2 and ISO 27001.

But it used a young startup named Delve for those certifications.
Delve is the Y Combinator AI-powered compliance startup that has been accused of misleading customers about their true compliance status by allegedly creating fake data and using auditors who merely rubber-stamp reports. Delve has strongly denied the allegations.
There's a nuance worth noting here. These certifications are meant to show that a company has good security policies in place to reduce the chances of incidents like this one. They don't, by themselves, prevent a company like LiteLLM from falling victim to malware. And while SOC 2 is supposed to cover policies around software dependencies, malware can still slip through.
Even so, as engineer Gergely Orosz noted on X after seeing people joking about it online, "Oh goodness, I truly believed this WAS a jest. … yet no, LiteLLM *actually* was 'Secured by Delve.'"
As for LiteLLM, CEO Krrish Dholakia did not comment on the company's use of Delve. He remains busy cleaning up the mess of being an attack victim.
"Our foremost objective is the ongoing inquiry alongside Mandiant. We are dedicated to disseminating the technical insights gained with the developer community once our forensic examination reaches completion," he told TechCrunch.

