Security researchers have uncovered malicious code in LiteLLM, an open-source AI platform developed by Y Combinator alum Krrish Dholakia. Despite the project's compliance certifications from Delve, it was hit by malware that stole login credentials and spread further through compromised dependencies.
The malware entered LiteLLM through a third-party dependency and compromised thousands of users within days before research scientist Callum McMahon detected it. The code was sloppy enough to crash McMahon's own machine, an irony that helped bring the vulnerability to light.
Delve, the AI-powered compliance startup that issued those certifications, has previously been accused of generating fake data and relying on unqualified auditors to rubber-stamp reports. Delve denies the allegations, but this incident raises serious doubts about the validity of the security assurances LiteLLM offered its users.
The irony is not lost on the tech community: as Andrej Karpathy noted, the malware's poor design suggests it was "vibe coded." LiteLLM's CEO, meanwhile, has stayed tight-lipped, saying only that the company is focused on rectifying the situation and will share its learnings with the developer community after a thorough forensic review.
The episode is a stark reminder that rigorous security practices matter in the AI space, even for projects that appear well protected by certifications. It leaves the industry asking how much such assurances are really worth when seemingly secure systems can be compromised this easily.
