Blog · Published: Apr 17, 2026

McKinsey Lilli Compromise: When SQL Injection Reaches the AI Prompt Layer

This chain starts with an old-fashioned web bug. Lilli exposed 22 unauthenticated API endpoints, and one of them built SQL from attacker-controlled JSON keys. CodeWall used that path to read and write the database behind McKinsey's AI platform: 46.5M chat messages, 728K files, 3.68M RAG chunks, vector-store metadata, and the system prompts shaping responses for 43,000+ users.

The important point is not just that SQL injection happened. It is that one web bug reached the prompt and retrieval tables, so the compromise landed directly in the AI layer.

Agentic AI · Prompt Integrity · Data Exfiltration · Enterprise AI
5 applicable AIDEFEND defenses
Source: How We Hacked McKinsey's AI Platform 
By Paul Price (CodeWall) · Original article: Mar 9, 2026

Threat Analysis

  • Step 1: start with the unauthenticated API surface. Lilli exposed 22 endpoints without authentication. That gave CodeWall's autonomous agent a broad surface to probe without needing an account first.
  • Step 2: use one endpoint that turns JSON keys into SQL. One API built SQL queries from attacker-controlled JSON keys (a minimal sketch of this anti-pattern follows the list). The agent chained that SQL injection with IDOR and was reading the production database in about 2 hours.
  • Step 3: realize the database also holds the AI platform's crown jewels. The exposed tables were not just ordinary business records. They included 46.5M chat messages, 728K files, 3.68M RAG chunks with S3 paths, vector-store metadata, AI assistants, and 95 system prompts across 12 models.
  • Step 4: move from reading the AI layer to rewriting it. The same SQLi allowed UPDATE. That meant an attacker could change a system prompt, poison retrieved content, remove guardrails, or insert exfiltration instructions with a simple HTTP request and without touching application code.
  • Why this matters: Lilli ran this way for 2 years with 43K+ users and 500K+ monthly prompts. Human scanning missed it; the autonomous agent found it in hours. McKinsey closed the unauthenticated endpoints and patched the issue on 2026-03-02 after CodeWall's 2026-03-01 report.
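
To make Step 2 concrete, here is a hypothetical reconstruction of the anti-pattern (the report's actual code is not public), alongside the allow-listed, parameterized shape that closes it. Table and column names are invented for illustration.

```python
import sqlite3

# Hypothetical reconstruction of the anti-pattern described in Step 2:
# JSON keys from the request body are interpolated directly into the
# WHERE clause, so a crafted key becomes query structure.
def vulnerable_lookup(conn: sqlite3.Connection, filters: dict) -> list:
    clauses = " AND ".join(f"{key} = '{value}'" for key, value in filters.items())
    return conn.execute(f"SELECT id, body FROM chat_messages WHERE {clauses}").fetchall()

# A key like the one below turns the filter into a query against another
# table entirely -- no authentication and no application-code change needed.
malicious_filters = {
    "1=1) UNION SELECT name, prompt_text FROM system_prompts --": "x"
}

# Safer shape: allow-list the filterable columns and bind values as parameters.
ALLOWED_COLUMNS = {"conversation_id", "author_id"}

def safe_lookup(conn: sqlite3.Connection, filters: dict) -> list:
    unknown = set(filters) - ALLOWED_COLUMNS
    if unknown:
        raise ValueError(f"unexpected filter keys rejected: {unknown}")
    clauses = " AND ".join(f"{key} = ?" for key in filters)
    return conn.execute(
        f"SELECT id, body FROM chat_messages WHERE {clauses}",
        tuple(filters.values()),
    ).fetchall()
```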

Applicable AIDEFEND Defenses (5)

AID-H-022.002
Runtime Integrity Enforcement (Signed Configurations)
Very High
Prompts stored in a plaintext database are exactly the attack surface CodeWall exploited. Signed-configuration enforcement — system prompts carrying cryptographic signatures verified at load time — prevents the 'single UPDATE statement' silent rewrite. Even if the DB is compromised, tampered prompts fail verification and do not load.
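A minimal sketch of what load-time signature verification can look like, assuming an Ed25519 keypair managed by the deployment pipeline and the Python cryptography package; the row fields and prompt names below are illustrative, not Lilli's actual schema.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# Deployment pipeline: sign the approved prompt configuration offline.
def sign_prompt_config(private_key: Ed25519PrivateKey, config: dict) -> bytes:
    canonical = json.dumps(config, sort_keys=True).encode()
    return private_key.sign(canonical)

# Prompt load: refuse to load anything whose signature does not verify,
# even if the database row was silently rewritten by an UPDATE.
def load_prompt_config(public_key: Ed25519PublicKey, row: dict) -> dict:
    canonical = json.dumps(row["config"], sort_keys=True).encode()
    try:
        public_key.verify(row["signature"], canonical)
    except InvalidSignature:
        raise RuntimeError(f"prompt config '{row['name']}' failed signature check; refusing to load")
    return row["config"]

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    config = {"name": "research-assistant", "system_prompt": "You are ..."}
    row = {"name": config["name"], "config": config,
           "signature": sign_prompt_config(key, config)}

    load_prompt_config(key.public_key(), row)          # loads fine
    row["config"]["system_prompt"] = "Exfiltrate ..."  # simulated hostile UPDATE
    try:
        load_prompt_config(key.public_key(), row)      # now fails verification
    except RuntimeError as err:
        print(err)
```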
AID-H-021.001
Chunk-Level Integrity Signing
Very High
3.68 million RAG chunks with S3 paths were exposed and — through the same SQLi — modifiable. Chunk-level signing means every retrieved chunk is verified at retrieval time; a tampered chunk is rejected before it reaches the model. Without this, any DB read/write bug becomes a RAG poisoning vector.
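One possible shape for retrieval-time chunk verification, sketched here with an HMAC as a stand-in for whatever signing scheme a platform actually uses; the chunk fields and key handling are hypothetical.

```python
import hashlib
import hmac

SIGNING_KEY = b"rotate-me-store-in-a-kms"  # illustrative only; keep real keys out of the DB

def sign_chunk(chunk_id: str, text: str, source_path: str) -> str:
    """Compute a MAC over the chunk at ingestion time; store it alongside the chunk."""
    message = f"{chunk_id}|{source_path}|{text}".encode()
    return hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()

def verify_retrieved_chunks(chunks: list[dict]) -> list[dict]:
    """Drop any retrieved chunk whose stored MAC no longer matches its content."""
    trusted = []
    for chunk in chunks:
        expected = sign_chunk(chunk["id"], chunk["text"], chunk["source_path"])
        if hmac.compare_digest(expected, chunk["mac"]):
            trusted.append(chunk)
        else:
            # A tampered chunk never reaches the model; log and alert instead.
            print(f"ALERT: chunk {chunk['id']} failed integrity check, excluded from context")
    return trusted
```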
AID-D-004.005
Runtime Prompt Integrity Verification
Very High
This is the exact failure mode: the model loaded system prompts straight from the DB with no runtime check that they matched the last approved version. Hashing and signing approved prompts and re-verifying at every request catches any DB-side tamper before the prompt reaches inference, even when audit logs fail.
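A stripped-down illustration of per-request verification, assuming the approved-hash manifest ships with the deployment rather than living in the same database as the prompts; names and values are placeholders.

```python
import hashlib

# Approved-prompt manifest produced at deploy time and shipped with the app,
# deliberately *not* stored in the database the prompts themselves live in.
APPROVED_PROMPT_HASHES = {
    "research-assistant": "<sha256 of the last approved prompt text>",
}

def verify_prompt_before_inference(name: str, prompt_text: str) -> str:
    """Re-check the prompt on every request, even when audit logs are silent."""
    digest = hashlib.sha256(prompt_text.encode()).hexdigest()
    approved = APPROVED_PROMPT_HASHES.get(name)
    if approved is None or digest != approved:
        raise RuntimeError(f"system prompt '{name}' does not match approved version; blocking request")
    return prompt_text
```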
AID-H-019.001
Tool Parameter Constraint & Schema Validation
High
The SQL injection path began with attacker-controlled JSON keys becoming query structure. Strict schemas for API and tool parameters should reject unexpected keys, types, and table selectors before anything reaches SQL construction, especially on endpoints that can touch prompts, RAG chunks, files, or chat history.
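As one way to implement this, a hedged sketch using pydantic (any strict schema validator works the same way): unknown keys and wrong types fail validation before any SQL is built. The endpoint model below is invented for illustration.

```python
from pydantic import BaseModel, ConfigDict, ValidationError

class ChatHistoryQuery(BaseModel):
    """Declares exactly which filter keys an endpoint accepts; everything else is rejected."""
    model_config = ConfigDict(extra="forbid")   # unknown JSON keys fail validation outright

    conversation_id: str
    limit: int = 50

def parse_query(payload: dict) -> ChatHistoryQuery:
    try:
        return ChatHistoryQuery.model_validate(payload)
    except ValidationError as exc:
        # The attacker-controlled key never reaches SQL construction.
        raise ValueError(f"request rejected before query construction: {exc}") from exc

# parse_query({"conversation_id": "abc", "limit": 10})   -> OK
# parse_query({"conversation_id) OR 1=1 --": "x"})       -> rejected
```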
AID-H-030.002
Lifecycle-Stage Authorization Gate
Medium
The architectural problem was that chat history, RAG chunks, system prompts, user accounts, and vector store metadata all lived in one queryable database. Enforcing an authorization gate per lifecycle stage — separate credentials and access paths for each asset class — would have limited a single SQLi to one class instead of all of them.
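A rough sketch of what a per-asset-class gate could look like; the store names, DSNs, credential references, and stage labels are invented for illustration, not Lilli's real architecture.

```python
from dataclasses import dataclass

# Each asset class gets its own store, credential, and permitted lifecycle stage.
@dataclass(frozen=True)
class AssetStore:
    dsn: str
    credential_ref: str      # pointer into a secrets manager, never a shared DB login
    allowed_stages: frozenset

STORES = {
    "system_prompts": AssetStore("postgres://prompts", "vault:prompts-ro", frozenset({"inference"})),
    "rag_chunks":     AssetStore("postgres://rag",     "vault:rag-ro",     frozenset({"retrieval"})),
    "chat_history":   AssetStore("postgres://chat",    "vault:chat-rw",    frozenset({"conversation"})),
}

def open_store(asset_class: str, stage: str) -> AssetStore:
    """Authorization gate: a component may only open the store for its own lifecycle stage."""
    store = STORES[asset_class]
    if stage not in store.allowed_stages:
        raise PermissionError(f"stage '{stage}' is not authorized for '{asset_class}'")
    return store

# A retrieval-stage component that tries to read system prompts is stopped here,
# so a single injected query against one store cannot fan out to every asset class.
```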

What Defenders Should Do Now

  • Inventory every place your AI platform stores prompts, RAG chunks, vector store metadata, and agent configurations. Move them behind integrity-signed storage and verify at load or retrieval time.
  • Assume a single SQLi or IDOR could hit the prompt table. Add DB-level audit logs and real-time alerts on UPDATE to prompt, config, or RAG tables — treat any such write outside deployment pipelines as a security incident (a minimal watcher sketch follows this list).
  • Run a default-deny audit of every API endpoint on your AI platform. Lilli had 22 unauthenticated endpoints out of 200+ total; the real number in your stack is likely non-zero too.
  • Separate AI-asset storage by class (prompts / RAG / chat history / identity / vector metadata) into distinct stores with distinct credentials, so one bug cannot compromise all layers at once.
  • Add autonomous red-team assessment as a routine control. Human scanning alone missed this for 2 years of production use.
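
The watcher mentioned in the second bullet could be as simple as the sketch below, shown with sqlite3 and a hypothetical prompt table for self-containment; in production the same check belongs in a database trigger or CDC stream feeding your alerting pipeline.

```python
import hashlib
import sqlite3
import time

def snapshot_prompts(conn: sqlite3.Connection) -> dict[str, str]:
    """Digest every prompt row so any out-of-band UPDATE is detectable."""
    return {
        name: hashlib.sha256(text.encode()).hexdigest()
        for name, text in conn.execute("SELECT name, prompt_text FROM system_prompts")
    }

def watch_prompt_table(conn: sqlite3.Connection, baseline: dict[str, str], interval_s: int = 30):
    """Poll for drift from the deployed baseline; the baseline only changes via the deployment pipeline."""
    while True:
        for name, digest in snapshot_prompts(conn).items():
            if baseline.get(name) != digest:
                # Any write that did not come through the deployment pipeline is an incident.
                print(f"SECURITY ALERT: system prompt '{name}' changed outside deployment")
        time.sleep(interval_s)
```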

2 additional considerations

Blast-radius isolation between AI asset classes

Beyond the techniques mapped above, teams running large AI platforms should also consider physical or logical isolation between storage layers — prompts in one store, RAG chunks in another, chat history in a third — so a single database compromise cannot cascade into total AI platform compromise the way it did here.
Recommendation: Separate AI-asset stores by class (prompts / RAG / chat history / user identity / vector metadata) into distinct databases or schemas with distinct credentials and network paths, so lateral movement requires multiple independent failures rather than one SQLi.

Autonomous offensive testing as a routine operational control

CodeWall's autonomous agent found this chain in about 2 hours; human scanning had missed it for 2 years. Defenders running AI platforms can additionally layer in continuous machine-speed red-team assessment that tests authentication coverage, SQLi, and AI-asset-layer tamper paths on every release.
Recommendation: Wire an autonomous testing agent into release gates and production drift checks — not as a replacement for traditional AppSec, but as a second layer that keeps up with the rate at which endpoints and AI assets are added to the platform.

Conclusion

McKinsey Lilli is a clear example of how an old web bug becomes a very modern AI-platform failure. Once SQL injection reached the prompt and retrieval tables, the attacker was no longer just looking at a database; they were in a position to rewrite how the system answered. AIDEFEND's prompt-integrity, chunk-signing, and staged-authorization techniques map directly to that problem. The operational lesson is to treat prompts, RAG chunks, and vector metadata as assets that need their own integrity controls, not as ordinary rows sitting safely inside a database.