TL;DR

On 24 March 2026, attackers poisoned LiteLLM—a popular AI gateway library—on PyPI, compromising NASA, Netflix, Stripe and NVIDIA by stealing cloud credentials and SSH keys. Australian SMBs using cloud workloads face identical risks: stolen tokens, lateral movement and silent data exfiltration. Three controls stop this cold: conditional access, privileged identity management and proper log retention.

What Happened

LiteLLM is an open-source Python library that routes API calls to over 100 LLM providers. It averages roughly 97 million downloads per month and sits deep inside cloud environments. On 24 March 2026, threat actors tracked as TeamPCP published two malicious versions—1.82.7 and 1.82.8—directly to PyPI using stolen publisher credentials.

The attack was only discovered because the malware contained a bug: a recursive fork bomb that crashed machines and raised alarms. Without that coding error, the compromise would likely have remained silent for weeks.

How They Got In

The breach chain started five days earlier. On 19 March, TeamPCP compromised Trivy, a widely used vulnerability scanner, by exploiting a misconfigured CI/CD workflow and an over-privileged personal access token that had not been revoked. Because LiteLLM used Trivy in its own pipeline, the poisoned scanner harvested the LiteLLM PyPI publish token from runner memory. The attackers then uploaded the malicious packages straight to PyPI, bypassing GitHub review entirely.

Version 1.82.8 was the more dangerous payload. It included a 34 KB .pth file that executes automatically whenever the Python interpreter starts—no import required. This means simply having the package installed was enough to trigger the malware.
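This mechanism is easy to audit for. The sketch below (file names are illustrative, not from the actual payload) lists every line in your site-packages .pth files that the interpreter would execute at startup—the same hook the payload abused:

```python
import site
import pathlib

def executable_pth_lines(directory):
    """Return (filename, line) pairs for lines in .pth files that the
    interpreter executes at startup (lines beginning with 'import')."""
    hits = []
    for pth in pathlib.Path(directory).glob("*.pth"):
        for line in pth.read_text(errors="ignore").splitlines():
            # site.py executes any .pth line that starts with 'import'
            if line.startswith(("import ", "import\t")):
                hits.append((pth.name, line))
    return hits

if __name__ == "__main__":
    for d in site.getsitepackages():
        for name, line in executable_pth_lines(d):
            print(f"{d}/{name}: {line}")
```

Note that legitimate tools (editable installs, setuptools shims) also ship .pth files with import lines, so a hit is a prompt to inspect, not proof of compromise.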

What They Took

Once active, the payload performed a comprehensive credential harvest:

  • Cloud and AI provider keys: OpenAI, Anthropic, Google Vertex AI and AWS tokens stored in environment variables
  • Infrastructure secrets: SSH private keys from ~/.ssh/, Git credentials and repository access tokens
  • System intelligence: Hostnames, network routing tables and authentication logs from /var/log/auth.log

For SMBs running AI workloads or developer tools in cloud tenants, this is a worst-case scenario: the keys to your kingdom live in environment variables and CI/CD secrets, exactly where the malware looked first.
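You can see what such a harvest would find on your own machine. A minimal sketch—the name patterns are illustrative guesses, not the malware's actual search list:

```python
import os

# Substrings commonly found in secret-bearing variable names (illustrative)
PATTERNS = ("KEY", "TOKEN", "SECRET", "PASSWORD", "CREDENTIAL")

def exposed_variables(environ):
    """Return names of environment variables that look like secrets."""
    return sorted(
        name for name in environ
        if any(p in name.upper() for p in PATTERNS)
    )

if __name__ == "__main__":
    for name in exposed_variables(os.environ):
        print(name)  # print names only -- never the values
```

Anything this prints on a developer laptop or CI runner is what a payload like this would have exfiltrated in seconds.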

What It Cost

While exact financial damages are still being assessed, the affected organisations faced immediate credential rotation across multiple cloud providers, emergency incident response and reputational exposure. For an Australian SMB with 20 staff and a lean IT budget, a comparable breach typically costs between AUD 150,000 and 500,000 when you factor in downtime, forensic investigation, regulatory notification and recovery. The Australian Cyber Security Centre (ACSC) reports that small businesses are among the most targeted victims precisely because they lack the detection layers that would catch this activity early.

What Would Have Prevented It

1. Conditional Access

The stolen PyPI token and harvested cloud credentials should never have worked from unknown locations or non-compliant devices. Conditional access policies enforce rules such as: require MFA from all locations, block sign-ins from high-risk countries and deny access from unmanaged devices. If your Microsoft 365, AWS or Google Cloud tenant does not have conditional access configured, a stolen password or leaked token is an instant open door. SMBs should implement at least one risk-based policy this week—block legacy authentication and require MFA for all admin roles.
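As a concrete starting point, here is a hedged sketch of a policy body for the Microsoft Graph API (posted to /identity/conditionalAccess/policies with the appropriate Graph permissions); the display name and the report-only initial state are choices, not requirements:

```python
# Minimal conditional access policy: require MFA for every user on every
# cloud app. Deploy in report-only mode first so you can review the impact.
policy = {
    "displayName": "Baseline: require MFA for all users",
    "state": "enabledForReportingButNotEnforced",  # switch to "enabled" after review
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}
```

Starting in report-only mode is deliberate: it lets you see which sign-ins would have been blocked before you lock anyone out.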

2. Privileged Identity Management

The Trivy compromise succeeded because a privileged personal access token with excessive scope was not revoked after a prior incident. Privileged Identity Management (PIM) eliminates standing admin rights by requiring just-in-time elevation, approval workflows and time-bound access. For Australian SMBs using Microsoft Entra ID or AWS IAM Identity Center, enabling PIM means your global administrator or root credentials do not exist in a usable state 24/7. If the LiteLLM pipeline had used short-lived tokens with minimal publish scope, the attackers would have harvested a worthless credential.

3. Log Retention and Centralised Monitoring

The malware searched auth logs and dumped environment variables. Without retained logs, you cannot detect impossible travel, mass file access or credential abuse after the fact. Australian SMBs should enforce a 90-day minimum log retention policy across cloud identity providers, endpoint detection platforms and server workloads. Forward those logs to a centralised location—whether that is Microsoft Sentinel, Google Security Operations or a managed SIEM—and tune alerts for high-confidence events such as mailbox rule creation, mass downloads and sign-ins from anonymising proxies.
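Impossible travel is a simple check once sign-in logs are retained: if two consecutive sign-ins for one account imply a travel speed no aircraft can reach, flag them. A self-contained sketch (the 900 km/h threshold approximates commercial flight speed):

```python
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371 * 2 * asin(sqrt(h))

def impossible_travel(events, max_kmh=900):
    """events: [(iso_timestamp, (lat, lon))] for one account.
    Returns timestamp pairs whose implied speed exceeds max_kmh."""
    flagged = []
    events = sorted(events)
    for (t1, p1), (t2, p2) in zip(events, events[1:]):
        hours = (datetime.fromisoformat(t2) - datetime.fromisoformat(t1)).total_seconds() / 3600
        if hours > 0 and haversine_km(p1, p2) / hours > max_kmh:
            flagged.append((t1, t2))
    return flagged
```

A sign-in from Sydney followed an hour later by one from London gets flagged; a Sydney sign-in followed a day later by one from Melbourne does not. Managed SIEMs run exactly this class of rule, but only against logs you actually kept.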

FAQ

Q: We are a 15-person business with no developers. Does a supply chain attack like LiteLLM affect us?

A: Yes. If you use cloud-based accounting, CRM or productivity tools that integrate AI features, those platforms may depend on libraries like LiteLLM upstream. The stolen credentials from such breaches are frequently used to pivot into customer tenants.

Q: Is conditional access expensive to set up?

A: No. Microsoft Entra ID P1 and Google Workspace Enterprise both include conditional access at no extra cost beyond the licence you likely already pay for. AWS IAM Identity Center offers comparable rules free of charge.

Q: How quickly should we rotate credentials if a vendor reports a breach?

A: Immediately for high-privilege secrets, and within 24 hours for all remaining credentials. The LiteLLM attackers moved from Trivy to PyPI in hours, not days.

Q: Do we need a full SOC, or is centralised logging enough for an SMB?

A: Centralised logging with 90-day retention and basic alerting is the pragmatic starting point. You do not need a 24/7 SOC on day one, but you do need enough signal to spot anomalous sign-ins and data access before encryption or exfiltration occurs.

Conclusion

The March 2026 LiteLLM breach proves that supply chain attacks are no longer theoretical—they are fast, automated and devastating. Australian SMBs cannot afford to assume that cloud providers will catch these issues upstream. Implement conditional access to stop stolen credentials at the door, deploy privileged identity management to shrink your attack surface and retain logs long enough to detect and respond. These three controls are not enterprise luxuries; they are the baseline for surviving 2026.

Ready to audit your cloud tenant? Visit consult.lil.business for a free cybersecurity assessment tailored to Australian SMBs.

References

  1. Australian Cyber Security Centre - Small Business Cyber Security Guide
  2. NIST Cybersecurity Framework Version 2.0
  3. Mandiant / Google Cloud - TeamPCP Supply Chain Campaign Analysis

TL;DR

  • Someone hid bad code in a popular AI tool called LiteLLM
  • 47,000 people downloaded it before anyone noticed
  • The bad code stole passwords and keys from computers automatically
  • You can protect yourself by checking your software and changing your passwords

What Happened? (Like You're 10)

Imagine you order a pizza from your favorite restaurant. The delivery driver brings it to your house, you pay, and you eat it. Everything seems normal.

But what if someone had poisoned the pizza ingredients at the factory before they even reached the restaurant? The restaurant made the pizza perfectly — they just couldn't see the poison hidden inside.

That's what happened with LiteLLM.

LiteLLM is like a "universal remote" for AI — it helps companies use different AI tools (like ChatGPT, Claude, and others) without rewriting their code each time. It's very popular because it saves developers a lot of time [1].

On March 24, 2026, bad guys managed to sneak a "poisoned" version of LiteLLM onto the website where developers download it. For 46 minutes, anyone who downloaded LiteLLM got the poisoned version instead of the safe one [2].

How Did They Sneak It In?

Think of it like this:

  1. The real LiteLLM is like a recipe book that helps you cook with different AI ingredients
  2. The bad guys added a secret page to the recipe book
  3. That secret page said: "Before you start cooking, copy all the keys and passwords from this house and send them to me"
  4. The copying happened automatically — you didn't have to do anything wrong

The bad guys used something called a .pth file. In Python (the programming language LiteLLM is written in), .pth files run automatically whenever Python starts. It's like hiding instructions in a book that make you do something every time you open the book, not just when you read that specific page [3].
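You can watch this mechanism fire safely. This sketch writes a harmless .pth file into a temporary folder and asks Python's site module to process it—the same way site-packages is processed at startup:

```python
import os
import site
import tempfile

# Write a harmless .pth file whose 'import' line sets a marker variable.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo.pth"), "w") as f:
    f.write("import os; os.environ['PTH_DEMO_RAN'] = 'yes'\n")

# site.addsitedir() processes .pth files exactly as startup does:
# lines starting with 'import' are executed as code.
site.addsitedir(d)

print(os.environ.get("PTH_DEMO_RAN"))  # → yes
```

Nothing in this demo ever gets imported by name, yet the code runs anyway—which is exactly why the poisoned package didn't need you to use it.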

What Did the Bad Guys Steal?

The poisoned code looked for and stole [2]:

  • SSH keys — These are like special keys that let you unlock servers remotely. It's like giving someone the key to your office building.
  • Cloud credentials — Passwords for services like Amazon Web Services, Google Cloud, and Microsoft Azure. These can cost you real money if bad guys use your account.
  • Environment variables — Secret settings that often contain passwords and API keys.
  • Cryptocurrency wallets — If you had any crypto on your machine, the malware looked for it.
  • Kubernetes configs — Settings for managing cloud applications.

How Many People Were Affected?

The numbers are pretty big [2]:

  • 47,000 downloads in just 46 minutes
  • 2,337 other software packages depend on LiteLLM (meaning if you installed one of them, you might have gotten the poisoned LiteLLM too)
  • 88% of those packages were set up in a way that would automatically get the poisoned version

How Do You Know If You Were Affected?

If you use Python and installed anything between March 24-25, 2026, check:

  1. Run this command: pip show litellm
  2. Look at the version number
  3. If it says 1.82.7 or 1.82.8, you downloaded the poisoned version
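The same check can be scripted across many machines. A sketch using the standard library's importlib.metadata:

```python
from importlib import metadata

COMPROMISED = {"1.82.7", "1.82.8"}  # the two poisoned releases

def classify(version):
    """Classify a litellm version string."""
    return "compromised" if version in COMPROMISED else "ok"

def litellm_status():
    """Return 'compromised', 'ok', or 'not installed' for this environment."""
    try:
        return classify(metadata.version("litellm"))
    except metadata.PackageNotFoundError:
        return "not installed"

if __name__ == "__main__":
    print(litellm_status())
```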

What Should You Do If You Were Affected?

If you think you might have been affected, here's your action plan:

  1. Change all your passwords — Especially for cloud services (AWS, Google Cloud, Azure) and any databases
  2. Generate new SSH keys — The old ones might have been stolen
  3. Check your cloud bills — Look for any unusual activity
  4. Update LiteLLM — Get the latest safe version
  5. Tell your IT person — If you have one, they need to know
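Step 1 can be partially automated. As an illustrative sketch, this filters a list of cloud access keys down to the ones that existed during the attack window and therefore need rotating; fetching the list itself would use your provider's SDK (for AWS, boto3's IAM list_access_keys):

```python
from datetime import datetime, timezone

ATTACK_START = datetime(2026, 3, 24, tzinfo=timezone.utc)

def keys_to_rotate(access_keys, cutoff=ATTACK_START):
    """access_keys: [{'AccessKeyId': str, 'CreateDate': datetime}].
    Any key created before the cutoff existed during the attack window,
    so rotate it whether or not abuse is visible yet."""
    return [k["AccessKeyId"] for k in access_keys if k["CreateDate"] < cutoff]
```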

How Can You Prevent This in the Future?

Here are some simple ways to protect yourself:

  1. Pin your versions — Instead of saying "give me the latest LiteLLM," say "give me LiteLLM version 1.80.0 exactly." This prevents auto-updates to bad versions [4].

  2. Use a lock file — Tools like pip freeze > requirements.txt save the exact versions you're using. This acts like a receipt showing exactly what you installed.

  3. Scan your dependencies — Tools like pip-audit can check your installed packages against a database of known bad versions [5].

  4. Don't store secrets in code — Use environment variables or secret managers instead of putting passwords directly in your code.

  5. Keep backups — If something goes wrong, you want to be able to restore from a known-good state.
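Tips 1 and 2 above can be spot-checked automatically. A small sketch that flags requirement lines without an exact == pin:

```python
def unpinned_requirements(requirements_text):
    """Return requirement lines that don't pin an exact version with '=='."""
    flagged = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" not in line:
            flagged.append(line)
    return flagged

example = """\
# requirements.txt
litellm
requests==2.31.0
"""
print(unpinned_requirements(example))  # → ['litellm']
```

Running this over your requirements file in CI turns "pin your versions" from advice into an enforced rule.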

The Good News

The security researchers at FutureSearch found the problem quickly and told PyPI (the Python package repository) within hours. PyPI removed the bad versions within 46 minutes [6].

The researchers also documented exactly how they found the problem and fixed it, which helps everyone learn how to respond faster next time.

FAQ

Q: Is it safe to use LiteLLM again?

A: Yes! The bad versions (1.82.7 and 1.82.8) have been removed from PyPI. You can safely install LiteLLM now — just make sure you're getting the latest version, which has been cleaned up [7].

Q: What if my business doesn't use Python?

A: If your business doesn't use Python, you weren't affected by this specific attack. However, similar supply chain attacks have happened in other programming languages like JavaScript (npm packages) and Ruby (RubyGems). The same protection principles apply [8].

Q: How do I know if my passwords were actually stolen?

A: You probably won't know immediately. The malware ran silently in the background. The safest approach is to assume any password that was on your computer during the attack window was compromised and change it.

Q: Was this a one-time thing?

A: Unfortunately, no. Supply chain attacks are becoming more common. In 2022, the widely used colors and faker npm packages were sabotaged, affecting millions of downloads. The trend is increasing, which is why learning to protect yourself is so important [8].

Q: Should we stop using open-source software?

A: No! Open-source packages are still generally safe and provide huge value. The key is to use them carefully: pin your versions, scan for vulnerabilities, and keep your dependencies updated. The benefits far outweigh the risks when you follow good practices [9].


References

[1] LiteLLM Documentation, "LiteLLM - Getting Started," BerriAI, 2026. [Online]. Available: https://docs.litellm.ai/

[2] D. Hnyk, "LiteLLM Hack: Were You One of the 47,000?," FutureSearch Blog, Mar. 25, 2026. [Online]. Available: https://futuresearch.ai/blog/litellm-hack-were-you-one-of-the-47000/

[3] C. McMahon, "Supply Chain Attack in litellm 1.82.8 on PyPI," FutureSearch Blog, Mar. 24, 2026. [Online]. Available: https://futuresearch.ai/blog/litellm-pypi-supply-chain-attack

[4] H. Schlawack, "Why You Should Pin Your Dependencies," Hynek Schlawack Blog, 2024. [Online]. Available: https://hynek.me/articles/python-pins/

[5] PyPI, "pip-audit: A tool for scanning Python environments for vulnerable packages," Python Packaging Authority, 2026. [Online]. Available: https://pypi.org/project/pip-audit/

[6] C. McMahon, "My minute-by-minute response to the LiteLLM malware attack," FutureSearch Blog, Mar. 25, 2026. [Online]. Available: https://futuresearch.ai/blog/litellm-attack-transcript/

[7] PyPI, "LiteLLM Package History," Python Package Index, 2026. [Online]. Available: https://pypi.org/project/litellm/

[8] Sonatype, "State of the Software Supply Chain Report 2025," Sonatype Inc., 2025. [Online]. Available: https://www.sonatype.com/state-of-the-software-supply-chain

[9] Open Source Security Foundation, "Securing the Open Source Software Supply Chain," OpenSSF, 2025. [Online]. Available: https://openssf.org/


Worried about software supply chain attacks? lilMONSTER helps small businesses secure their development tools and respond to security incidents. Book a consultation to protect your business.

Ready to strengthen your security?

Talk to lilMONSTER. We assess your risks, build the tools, and stay with you after the engagement ends. No clipboard-and-leave consulting.

Get a Free Consultation