LiteLLM Supply Chain Attack Hits 97 Million Downloads: What AI Security Means for CRE Investors

What is the LiteLLM supply chain attack? The LiteLLM supply chain attack is a critical cybersecurity incident discovered on March 24, 2026, in which threat actors injected credential-stealing malware into LiteLLM, the most widely used open-source AI proxy in the Python ecosystem with approximately 97 million monthly downloads. For CRE investors who rely on AI tools for underwriting, property management, and deal analysis, this attack exposes a growing risk in the AI software supply chain that could compromise sensitive financial data and tenant information. For a full overview of AI tools used in the industry, see our guide on AI tools for real estate investors.

Key Takeaways

  • LiteLLM versions 1.82.7 and 1.82.8 contained credential-stealing malware that harvested SSH keys, cloud credentials, API keys, and crypto wallets
  • The attack was part of a broader campaign by threat group TeamPCP that compromised three major AI and security tools in one week
  • CRE firms using AI tools built on LiteLLM should assume credential compromise and immediately rotate all API keys and passwords
  • Over 600 public GitHub projects had unpinned LiteLLM dependencies, highlighting systemic supply chain risk in enterprise AI
  • AI supply chain security is now a critical due diligence item for CRE investors evaluating AI tool vendors

What Happened: The LiteLLM Compromise

On March 24, 2026, security researcher isfinne discovered that LiteLLM version 1.82.8, published on the Python Package Index (PyPI), contained a credential-stealing payload. Within hours, version 1.82.7 was confirmed to carry a similar malicious payload through a different injection method. According to Wiz Security, simply installing the compromised version triggered the malware with no import statement required.

The malicious code harvested SSH keys, cloud credentials, Kubernetes configurations, crypto wallets, and API keys. It encrypted the stolen data and exfiltrated it via a POST request to models.litellm.cloud, a lookalike domain controlled by the attackers rather than the legitimate LiteLLM team at BerriAI. The compromised versions were available for approximately three hours before PyPI quarantined the entire LiteLLM package, but with roughly 3.4 million downloads per day, the exposure window affected thousands of organizations.

Why CRE Investors Should Pay Attention

LiteLLM is not a consumer-facing product. It is infrastructure: a proxy layer that sits between AI applications and large language models like ChatGPT, Claude, and Gemini. It is used by thousands of enterprise applications to route API calls, manage model switching, and handle authentication with multiple AI providers. Many CRE technology platforms and custom AI tools are built on top of LiteLLM without CRE firms even knowing it.
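To make the proxy's role concrete, LiteLLM's proxy is typically driven by a YAML config that maps model aliases to providers and their credentials. The sketch below is illustrative only: the model aliases and environment variable names are hypothetical placeholders, not taken from any real deployment. It shows why a compromised LiteLLM install sits directly in the credential path.

```yaml
# Illustrative LiteLLM proxy config sketch (aliases and env vars are placeholders).
# Every provider key the proxy routes for is reachable from this process,
# which is why a compromised build can harvest so many credentials at once.
model_list:
  - model_name: underwriting-gpt          # alias used by internal CRE tools
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: lease-abstraction-claude
    litellm_params:
      model: anthropic/claude-3-opus-20240229
      api_key: os.environ/ANTHROPIC_API_KEY
```

A single proxy process like this holds keys for every downstream provider, so malware running in the same environment inherits that same reach.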

Property management platforms at risk. CRE firms using AI-powered tools for tenant screening, lease abstraction, NOI analysis, and property condition assessments may be running software built on LiteLLM. If a vendor's development environment was compromised, the credentials used to access property management systems like Yardi, AppFolio, or RealPage, along with financial databases and tenant records, could be in the hands of attackers.

Custom AI deployments exposed. CRE firms that have built custom AI underwriting tools, deal analysis pipelines, or automated reporting systems using Python and LLM APIs likely have LiteLLM somewhere in their dependency tree. The attack demonstrates that even firms with strong internal security can be compromised through their open-source dependencies.

The shadow AI problem compounds risk. As we covered in our analysis of shadow AI agents flooding enterprises, 80% of organizations have unmanaged AI tools operating outside IT governance. Employees who independently deployed AI tools built on LiteLLM may have unknowingly exposed company credentials without the security team's knowledge.

The TeamPCP Campaign: A Coordinated Attack

The LiteLLM compromise was not an isolated incident. It was the third major supply chain attack by a threat group known as TeamPCP in a single week:

  • March 19: Trivy, a widely used security vulnerability scanner from Aqua Security, was compromised with 44 repositories defaced
  • March 21: Checkmarx and KICS GitHub Actions were compromised, affecting CI/CD pipelines across hundreds of organizations
  • March 24: LiteLLM was compromised after TeamPCP obtained the maintainer's PyPI credentials through the prior Trivy breach

TeamPCP deliberately targets the tools that organizations trust implicitly: vulnerability scanners and API gateways. These tools have the broadest access to credentials and infrastructure, making them ideal vectors for cascading supply chain attacks. For CRE firms, this pattern means that both security tools and AI tools can become attack surfaces. Our coverage of the McKinsey AI security breach highlighted a similar dynamic where a trusted internal AI platform became an entry point for attackers who accessed 46.5 million messages and 728,000 files.

Financial and Operational Impact for CRE Firms

The financial implications of AI supply chain compromises for CRE firms are significant and growing as AI adoption accelerates. The AI in real estate market is projected to reach $1.3 trillion by 2030 at 33.9% CAGR (Source: industry research). As CRE firms increase AI adoption, the attack surface grows proportionally.

  • Credential rotation costs: Firms that used compromised LiteLLM versions must rotate all API keys, cloud credentials, SSH keys, and database passwords across every system accessible from the affected machines. For large CRE portfolios with dozens of property management platforms and financial systems, this process can take days and cost tens of thousands of dollars in labor
  • Data breach liability: If tenant personally identifiable information or financial data was exfiltrated through compromised AI tools, CRE firms may face regulatory penalties under state data breach notification laws, particularly in California, New York, and Texas, where notification requirements are strict and where rent rolls and DSCR calculations routinely contain sensitive borrower and tenant data
  • Vendor trust erosion: CRE firms that outsource AI capabilities to third-party vendors must now add supply chain security audits to their vendor due diligence process, increasing procurement timelines and costs
  • Insurance implications: Cyber insurance carriers are increasingly scrutinizing AI tool governance. Firms without formal AI supply chain security policies may face higher premiums or coverage exclusions

What CRE Investors Should Do Now

  • Audit your AI tool stack: Ask every AI vendor whether their products use LiteLLM or any component compromised by TeamPCP. Request a software bill of materials (SBOM) from each vendor
  • Rotate credentials immediately: If any system in your organization installed LiteLLM version 1.82.7 or 1.82.8, assume full credential compromise and rotate every key, token, and password accessible from that machine
  • Pin dependency versions: For custom AI deployments, ensure all Python dependencies are pinned to specific verified versions rather than using floating version ranges that automatically pull new releases
  • Implement AI governance: Establish a formal approval process for AI tools, closing the shadow AI gap that allows unvetted tools into production environments
  • Add supply chain security to vendor due diligence: Include questions about dependency management, code signing, and incident response in your AI vendor evaluation process
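The dependency-pinning step above can be sketched in a few lines of Python that flag risky LiteLLM entries in a requirements file. This is a minimal illustration under stated assumptions, not a substitute for an SBOM or a real scanner such as pip-audit; the function name and classification labels are our own.

```python
# Minimal sketch: flag risky LiteLLM specs in a requirements.txt line.
# The version numbers below are the compromised releases named in this article.
COMPROMISED = {"1.82.7", "1.82.8"}

def audit_requirement(line: str) -> str:
    """Classify a single requirements.txt line for LiteLLM risk."""
    spec = line.split("#")[0].strip().lower()  # drop trailing comments
    if not spec.startswith("litellm"):
        return "not-litellm"
    if "==" not in spec:
        # Floating ranges (>=, ~=, or no pin at all) auto-pull new releases,
        # which is exactly how compromised versions spread.
        return "unpinned"
    version = spec.split("==", 1)[1].strip()
    return "compromised" if version in COMPROMISED else "pinned"

# Example usage against hypothetical requirements.txt lines:
for line in ["litellm>=1.80", "litellm==1.82.8", "litellm==1.81.0", "requests==2.31.0"]:
    print(f"{line} -> {audit_requirement(line)}")
```

Running a check like this across every requirements.txt and pyproject.toml in your repositories is a fast first pass before requesting formal SBOMs from vendors.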

If you are ready to strengthen your AI security posture while maximizing the benefits of AI adoption, The AI Consulting Network specializes in helping CRE investors implement AI tools with proper governance and risk management. For comprehensive AI cybersecurity guidance, see our analysis of Google's $32 billion Wiz acquisition and what it means for CRE.

Frequently Asked Questions

Q: What is LiteLLM and why does it matter to CRE investors?

A: LiteLLM is the most popular open-source proxy for routing API calls to AI models like ChatGPT, Claude, and Gemini. With 97 million monthly downloads, it is embedded in thousands of enterprise AI applications, including many used by CRE firms for underwriting, deal analysis, and property management. When LiteLLM was compromised, any CRE firm using tools built on it was potentially exposed to credential theft.

Q: How do I know if my CRE firm was affected by the LiteLLM attack?

A: Ask your IT team and AI vendors whether LiteLLM is used in any internal or third-party tools. Check Python dependency files (requirements.txt, pyproject.toml) for LiteLLM references. If versions 1.82.7 or 1.82.8 were installed at any point, assume credential compromise and begin rotating all passwords and API keys immediately.
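A quick local check along the lines of this answer can be done with Python's standard library. This is a hedged sketch: `classify` is a helper introduced here for illustration, and it only inspects the current Python environment, not every machine in your firm.

```python
# Check whether the locally installed litellm package is one of the
# compromised releases named in this article (1.82.7 / 1.82.8).
from importlib import metadata

COMPROMISED = {"1.82.7", "1.82.8"}

def classify(version: str) -> str:
    """Label a LiteLLM version string against the known-bad releases."""
    return "compromised" if version in COMPROMISED else "ok"

try:
    installed = metadata.version("litellm")
    print(f"litellm {installed}: {classify(installed)}")
except metadata.PackageNotFoundError:
    print("litellm is not installed in this environment")
```

Note that a clean result today does not rule out past exposure: if a compromised version was ever installed, credential rotation is still warranted.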

Q: What is a software supply chain attack?

A: A software supply chain attack occurs when attackers compromise a widely used software component rather than targeting individual organizations directly. By injecting malware into a trusted package like LiteLLM, attackers can reach thousands of organizations simultaneously. For CRE firms, this means that even strong internal security cannot fully protect against compromised dependencies in AI tools.

Q: How can CRE firms protect against future AI supply chain attacks?

A: Key protections include requesting software bills of materials from AI vendors, pinning dependency versions in custom deployments, establishing formal AI governance processes, and conducting regular security audits of all AI tools. For personalized guidance on implementing these practices, connect with Avi Hacker, J.D. at The AI Consulting Network.