Blog Published: Apr 18, 2026

Bain Pyxis Compromise: When a Frontend Credential Owns Eleven Databases and the Identity Plane

This case is easy to explain and dangerous for the same reason: Bain Pyxis shipped a live AI service-account credential inside a public frontend bundle. CodeWall used that one login to enter production in 18 minutes, query 11 databases through raw-SQL and model-connected tools, and then write into Bain's Okta tenant through a separate GraphQL path.

One leaked service account became the key to data, conversations, prompts, and identity. The lesson is to treat an AI platform's service account like a production admin account: never ship it to the browser, never let it span the whole data layer, and never let it reach the IdP write plane.

Agentic AI · Identity & Access · Data Exfiltration · Enterprise AI
5 applicable AIDEFEND defenses
Source: How We Hacked Bain's Competitive Intelligence Platform 
By Paul Price (CodeWall) · Original article: Apr 13, 2026

Threat Analysis

  • Step 1: download the public frontend and pull out the password. Pyxis shipped a real AI service-account username/password inside a JS bundle that any browser could download. CodeWall used that credential and reached production in 18 minutes. The first failure was simple: a live secret was left in a public file.
  • Step 2: discover that the same account can see almost everything. That one service account had read/write across 11 databases and hundreds of roles. It reached consumer-transaction data, client schemas, AI conversations, and the Pyxis system prompt table.
  • Step 3: turn the foothold into direct data access. A Pyxis API endpoint accepted raw SQL and returned database errors, and platform LLM functions could query live production tables. In practice, the leaked credential turned the AI platform into a front door for the database.
  • Step 4: pivot from data theft to identity takeover. A separate GraphQL path allowed account creation and directory writes inside Bain's Okta tenant without a new privileged approval step. The attacker could move from reading data to creating lasting access.
  • Why this matters: activity logs held full JWTs, export features accepted attacker-chosen destinations, and one API could clone the whole database. This was not just a read-only leak; it was a full tamper-and-persist path built around one over-privileged service account.

Applicable AIDEFEND Defenses (5)

AID-H-004.002
Service & API Authentication
Very High
Every leg of this attack chain rests on IAM failures around a single AI service account: its password was shipped in a frontend bundle, it held read/write across 11 databases and hundreds of roles, its JWTs lived for 365 days without MFA and were written verbatim into an activity log, and its session reached a GraphQL endpoint that could mutate the corporate Okta directory. Real IAM for AI means privileged credentials never sit in client-side artifacts, service accounts are scoped per asset class (not per platform), tokens are short-lived and never logged in full, and an AI backend cannot reach the enterprise IdP's write plane.
AID-H-019.002
Policy-Based Access Control
Very High
Pyxis exposed LLM function invocation against live production tables with 8 models available — effectively turning every model into a query tool carrying the service account's full reach, plus additional primitives for bulk export to attacker-controlled destinations and a single-call full DB clone. Policy-based access control means the model cannot call database, export, or admin primitives without a separate authorization evaluated per action type, per data class, and per destination; 'query prod' is never a single exposed tool.
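A per-action gate of this kind can be sketched as a default-deny allow-list keyed on action type, data class, and destination. Everything here (ToolCall, POLICY) is illustrative, not a real Pyxis structure:

```python
# Hypothetical policy gate: every tool call the model makes must pass a
# per-action / per-data-class / per-destination check before execution.

from dataclasses import dataclass

@dataclass(frozen=True)
class ToolCall:
    action: str        # e.g. "read", "export", "admin"
    data_class: str    # e.g. "public", "client_pii", "system_prompt"
    destination: str   # e.g. "user_session", "s3://attacker-bucket"

# Allow-list: (action, data class) -> permitted destinations. Anything
# absent from the map is denied, including every "admin" primitive.
POLICY = {
    ("read", "public"): {"user_session"},
    ("read", "client_pii"): {"user_session"},  # no bulk-export destination
}

def authorize(call: ToolCall) -> bool:
    allowed = POLICY.get((call.action, call.data_class), set())
    return call.destination in allowed

# An export of client data to an attacker-chosen bucket fails by default:
print(authorize(ToolCall("export", "client_pii", "s3://attacker-bucket")))  # False
print(authorize(ToolCall("read", "public", "user_session")))                # True
```

The point of the shape is that the destination is part of the authorization key: Pyxis's export-to-anywhere and clone-everything primitives would have no row in this table to match.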
AID-H-018.002
Least-Privilege Tool Architecture
High
Pyxis exposed raw SQL and bulk-export primitives behind the AI platform. Replacing catch-all database tools with narrow, single-purpose functions for approved questions, schemas, and export paths would reduce the blast radius even if one service credential leaks.
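Concretely, "narrow, single-purpose functions" means the model picks a parameter, never the SQL. A minimal sketch, with a hypothetical reports table standing in for a real schema:

```python
# Sketch of narrowing a catch-all "run_sql" tool into a single-purpose
# function: fixed query text, bound parameter. Table/columns are hypothetical.

import sqlite3

def get_report_summary(conn: sqlite3.Connection, report_id: int) -> list:
    # The model can choose report_id; it cannot choose the query.
    return conn.execute(
        "SELECT title, published_at FROM reports WHERE id = ?", (report_id,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (id INTEGER, title TEXT, published_at TEXT)")
conn.execute("INSERT INTO reports VALUES (1, 'Q1 market scan', '2026-01-15')")
print(get_report_summary(conn, 1))  # [('Q1 market scan', '2026-01-15')]
```

Even with the leaked credential, an attacker facing only functions of this shape gets per-row answers to approved questions, not an open SQL console against 11 databases.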
AID-H-033.002
Cross-Tenant Serving-State Isolation
High
Every Fortune 500 client sat in its own schema inside the same database, reachable from the same AI service account — tenant separation in name only. The 9,989 AI conversations included cross-client competitive-intelligence questions authored by named client employees, all readable from a single foothold. True multi-tenant isolation requires each client's data to sit behind a distinct identity, schema boundary, and retrieval policy, so an LLM session initiated by one client cannot query another client's schema regardless of agent code.
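One way to make "distinct identity per tenant" concrete is to bind every session to a tenant-specific credential and refuse any cross-schema reference at the routing layer. The names below are illustrative:

```python
# Sketch: each tenant gets its own credential and schema, so a session bound
# to one tenant cannot even form a query against another tenant's schema.

TENANT_CREDS = {
    "acme":   {"schema": "acme_schema",   "role": "svc_acme_ro"},
    "globex": {"schema": "globex_schema", "role": "svc_globex_ro"},
}

class TenantSession:
    def __init__(self, tenant: str):
        creds = TENANT_CREDS[tenant]   # distinct identity per tenant
        self.schema = creds["schema"]
        self.role = creds["role"]

    def table_ref(self, table: str) -> str:
        if "." in table:               # reject explicit cross-schema references
            raise PermissionError("cross-schema access denied")
        return f"{self.schema}.{table}"

s = TenantSession("acme")
print(s.table_ref("conversations"))    # acme_schema.conversations
```

With this shape, the Pyxis foothold would have yielded one client's schema, not every Fortune 500 client's questions from a single login.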
AID-H-017
System Prompt Hardening
Medium
The 18,621-character Pyxis system prompt — containing the report methodology, SQL schema definitions, and analytical frameworks the platform is built on — was retrievable through conversation metadata by any authenticated caller. The prompt is both IP and a recon aid: its embedded schema names are a blueprint of the production database. Hardening means the prompt reaches the model through a privileged channel only, is never returned verbatim in user-reachable responses or metadata, and is protected by integrity controls so tampered variants cannot silently load.
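The integrity-control piece can be as small as a hash pinned at deploy time, checked before the prompt is handed to the model. A minimal sketch, with a stand-in prompt string:

```python
import hashlib

# Hypothetical sketch: the system prompt lives in a privileged server-side
# store, and a hash pinned at deploy time stops tampered variants loading.

PROMPT = "You are Pyxis, a competitive-intelligence assistant."  # stand-in
PINNED_SHA256 = hashlib.sha256(PROMPT.encode()).hexdigest()      # set at deploy

def load_system_prompt(prompt: str, pinned: str) -> str:
    if hashlib.sha256(prompt.encode()).hexdigest() != pinned:
        raise RuntimeError("system prompt integrity check failed")
    return prompt  # reaches the model via a privileged channel only
```

The check does nothing about disclosure on its own; the disclosure fix is simpler still: the prompt never appears in conversation metadata or any user-reachable response in the first place.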

What Defenders Should Do Now

  • Rotate every AI service-account credential, then audit git history, frontend build artifacts, and config-as-code repos for any residual copies; assume anything shipped to a browser is already leaked.
  • Re-scope every AI service account to the minimum per-asset-class permission set. No account should hold read/write across your whole AI data layer just because the platform is convenient to build that way.
  • Separate the AI platform's credentials from any IdP write path. An AI backend should never be able to mutate your Okta, Azure AD, or Google Workspace directory on its own.
  • Audit every LLM function and tool the platform exposes — gate each call by data class, action type, and destination. Retire any 'run SQL on prod' catch-all primitive and any bulk-export endpoint that accepts attacker-controlled destinations.
  • Add secret-scanning to your frontend release gate and wire autonomous red-team testing into release / drift checks. Bain's leaked credential was in production long enough that a passive scanner found it; a traditional pentest did not.

Two Additional Considerations

Frontend build secret detection as a release gate

Beyond the IAM hardening above, teams shipping AI platforms should also consider automated secret scanning on every frontend build artifact before the build reaches production CDN. Bain's leaked credential sat inside a bundle an autonomous agent could download passively; any secret-detection tool that had run against that bundle would have flagged it.
Recommendation: Wire a gitleaks / trufflehog-style scanner into the frontend CI pipeline with builds blocked on any high-confidence credential match; pair it with a weekly sweep of already-deployed bundles and a standing incident playbook for any hit found in production.
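To show how little machinery the gate requires, here is a minimal sketch of a pre-deploy bundle scan; the two patterns are illustrative, and a real gate would use a gitleaks/trufflehog ruleset rather than hand-rolled regexes:

```python
# Minimal sketch: flag credential-shaped strings in a built JS bundle
# before it ships to the CDN. Patterns are illustrative, not exhaustive.

import re

SECRET_PATTERNS = [
    # key = "value" pairs for password/secret/api-key style names
    re.compile(r"(?i)(password|passwd|secret|api[_-]?key)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
    # anything shaped like a JWT
    re.compile(r"\beyJ[\w-]{10,}\.[\w-]{10,}\.[\w-]+\b"),
]

def scan_bundle(text: str) -> list:
    hits = []
    for pat in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pat.finditer(text))
    return hits

bundle = 'const cfg = { user: "svc-pyxis", password: "Hunter2Hunter2" };'
print(len(scan_bundle(bundle)))  # any non-zero result should fail the build
```

The same function run weekly against already-deployed bundles covers the second half of the recommendation: a credential that slipped past CI still gets found before a passive external scanner does.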

Autonomous offensive testing as a routine operational control

CodeWall's autonomous agent chained this entire attack in hours; a scoped human pentest had not caught it. Defenders running AI platforms can additionally layer in continuous machine-speed red-team assessment that covers authentication, SQL injection, IdP write paths, the LLM tool surface, and secret leakage on every release.
Recommendation: Wire an autonomous testing agent into release gates and production drift checks as a second layer on top of traditional AppSec, so checklist-style pentests no longer define the upper bound of what you can catch.

Conclusion

Bain Pyxis shows how fast an AI platform collapses when a frontend secret is really a production admin key. The model was not the first thing attacked; the identity behind it was. Once that identity could touch databases, prompts, export paths, and Okta, an ordinary credential leak turned into full AI-platform compromise. AIDEFEND's IAM, tool-authorization, multi-tenant-isolation, and system-prompt hardening techniques cover the defensive floor; the operational lesson is that an AI platform's service account must be scoped, audited, and monitored like the most privileged database admin in the environment.