Bain Pyxis Compromise: When a Frontend Credential Owns Eleven Databases and the Identity Plane
This case is easy to explain and dangerous for the same reason: Bain Pyxis shipped a live AI service-account credential inside a public frontend bundle. CodeWall used that one login to enter production in 18 minutes, query 11 databases through raw-SQL endpoints and model-connected tools, and then write into Bain's Okta tenant through a separate GraphQL path.
One leaked service account became the key to data, conversations, prompts, and identity. The lesson is to treat an AI platform's service account like a production admin account: never ship it to the browser, never let it span the whole data layer, and never let it reach the IdP write plane.
Threat Analysis
- Step 1: download the public frontend and pull out the password. Pyxis shipped a real AI service-account username/password inside a JS bundle that any browser could download. CodeWall used that credential and reached production in 18 minutes. The first failure was simple: a live secret was left in a public file.
- Step 2: discover that the same account can see almost everything. That one service account held read/write access across 11 databases and hundreds of roles. It reached consumer-transaction data, client schemas, AI conversations, and the Pyxis system-prompt table.
- Step 3: turn the foothold into direct data access. A Pyxis API endpoint accepted raw SQL and returned database errors, and platform LLM functions could query live production tables. In practice, the leaked credential turned the AI platform into a front door for the database.
- Step 4: pivot from data theft to identity takeover. A separate GraphQL path allowed account creation and directory writes inside Bain's Okta tenant without a new privileged approval step. The attacker could move from reading data to creating lasting access.
- Why this matters: activity logs held full JWTs, export features accepted attacker-chosen destinations, and one API could clone the whole database. This was not just a read-only leak; it was a full tamper-and-persist path built around one over-privileged service account.
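Step 1 above is mechanical enough to sketch in a few lines. The patterns below are illustrative assumptions, not the rules CodeWall actually used; production scanners such as gitleaks or trufflehog ship far broader rule sets:

```python
import re

# Illustrative patterns only (assumptions for this sketch, not the attacker's rules).
CREDENTIAL_PATTERNS = {
    "hardcoded_password": re.compile(
        r'(?i)(?:password|passwd|pwd)["\']?\s*[:=]\s*["\']([^"\']{8,})["\']'
    ),
    "jwt": re.compile(r'eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+'),
    "db_connection_string": re.compile(
        r'(?:postgres|postgresql|mysql)://[^\s"\']+:[^\s"\']+@[^\s"\']+'
    ),
}

def scan_bundle(bundle_text: str) -> list[tuple[str, str]]:
    """Scan a downloaded frontend JS bundle for embedded credentials."""
    hits = []
    for name, pattern in CREDENTIAL_PATTERNS.items():
        for match in pattern.finditer(bundle_text):
            hits.append((name, match.group(0)))
    return hits
```

Anything a script this simple can find in a public bundle should be treated as already compromised, because any browser in the world can run the equivalent.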
Applicable AIDEFEND Defenses (5)
What Defenders Should Do Now
- Rotate every AI service-account credential, then audit git history, frontend build artifacts, and config-as-code repos for any residual copies; assume anything shipped to a browser is already leaked.
- Re-scope every AI service account to the minimum per-asset-class permission set. No account should hold read/write across your whole AI data layer just because the platform is convenient to build that way.
- Separate the AI platform's credentials from any IdP write path. An AI backend should never be able to mutate your Okta, Azure AD, or Google Workspace directory on its own.
- Audit every LLM function and tool the platform exposes, and gate each call by data class, action type, and destination. Retire any 'run SQL on prod' catch-all primitive and any bulk-export endpoint that accepts attacker-controlled destinations.
- Add secret scanning to your frontend release gate and wire autonomous red-team testing into release and drift checks. Bain's leaked credential sat in production long enough for a passive scanner to find it; a traditional pentest did not.
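The tool-gating point above can be made concrete. The sketch below assumes a hypothetical (data class, action, destination) policy triple; every name in it is invented, but the shape, a default-deny allowlist with no catch-all SQL primitive, is what matters:

```python
from dataclasses import dataclass

# Hypothetical allowlist: each tool may touch only named (data_class, action,
# destination) triples. Everything else, including any "run arbitrary SQL on
# prod" catch-all, is denied by default.
ALLOWED_TRIPLES = {
    ("ai_conversations", "read", "internal_ui"),
    ("client_reporting", "read", "internal_ui"),
}

@dataclass(frozen=True)
class ToolCall:
    tool_name: str
    data_class: str
    action: str
    destination: str

def authorize(call: ToolCall) -> bool:
    """Default-deny gate for LLM tool calls."""
    return (call.data_class, call.action, call.destination) in ALLOWED_TRIPLES
```

A bulk export to an attacker-chosen bucket fails closed under this design, because an attacker-controlled destination can never appear in the allowlist.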
Two Additional Considerations
- Frontend build secret detection as a release gate
- Autonomous offensive testing as a routine operational control
Conclusion
Bain Pyxis shows how fast an AI platform collapses when a frontend secret is really a production admin key. The model was not the first thing attacked; the identity behind it was. Once that identity could touch databases, prompts, export paths, and Okta, an ordinary credential leak turned into full AI-platform compromise. AIDEFEND's IAM, tool-authorization, multi-tenant-isolation, and system-prompt hardening techniques cover the defensive floor; the operational lesson is that an AI platform's service account must be scoped, audited, and monitored like the most privileged database admin in the environment.