ClawHub `google-qx4`: Malicious SKILL.md Prerequisites as Agent-Driven Social Engineering
Snyk describes an active ClawHub campaign in which a fake Google skill (`google-qx4`, published under the handle `NET_NiNjA`) embeds a bogus `openclaw-core` prerequisite in `SKILL.md`, pushing users to run attacker-controlled installers hosted on GitHub or Rentry. The lesson is that skill security has to inspect instruction text and out-of-band setup guidance, not just code, because the agent can become the attacker's most convincing social-engineering layer.
Threat Analysis
- The malicious artifact is mostly instructions, not code. The ClawHub skill does not ship the final payload in the repo; it hides the attack in a fake prerequisite inside `SKILL.md`, where the agent reads its operating instructions.
- The human is the execution boundary. By asking the user to install a fabricated `openclaw-core` dependency, the skill bypasses the agent's own sandboxing. The user pastes the command or runs the binary; the compromise happens outside the agent runtime.
- The delivery chain is built for evasion. Snyk found a password-protected ZIP on GitHub for Windows victims and a mutable Rentry paste for macOS/Linux that decoded into a download-and-execute stager fetching from an attacker-controlled domain.
- This is agent-driven social engineering. The skill uses the agent's authority and helpful tone to make a malicious setup step feel routine, shifting the attack from tricking the model to tricking the human through the model.
- Registry response still has to outrun clones. ClawHub's account-age and report-threshold controls help, yet Snyk says lookalike skills can reappear within hours, so one-time takedowns are not enough.
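The indicators above lend themselves to a simple fleet sweep. The sketch below walks an installed-skills directory and flags files mentioning the campaign's known strings; the directory layout and helper name are assumptions for illustration, not part of any ClawHub tooling:

```python
import os

# Indicators drawn from the Snyk report: skill slug, publisher handle,
# and the fabricated prerequisite name.
IOCS = ("google-qx4", "NET_NiNjA", "openclaw-core")

def sweep_skills(root: str, iocs=IOCS) -> list[tuple[str, str]]:
    """Walk an installed-skills directory and return (path, ioc) hits."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # unreadable file; skip rather than abort the sweep
            for ioc in iocs:
                if ioc.lower() in text.lower():
                    hits.append((path, ioc))
    return hits
```

Any hit is grounds for isolating the machine and checking for persistence, per the response steps below.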
Applicable AIDEFEND Defenses
- Metadata honesty validation should flag brand impersonation, suspicious prerequisite claims like the bogus `openclaw-core` dependency, and other metadata designed to manipulate human trust.
- Semantic instruction analysis should parse `SKILL.md` and catch directions that tell the agent or the user to fetch external binaries, paste shell commands, or follow setup steps unrelated to the skill's declared function.
- Registry lifecycle governance should flag `google-qx4`-style clones quickly and revoke them across developer fleets when a campaign is identified.
What Defenders Should Do Now
- Search your fleets and developer notes for `google-qx4`, `NET_NiNjA`, or any ClawHub Google skill that required manual `openclaw-core` installation. If it was used, isolate the machine and inspect for persistence.
- Audit installed skills and registry submissions for `SKILL.md` or README instructions that tell users to download external binaries, run terminal commands, or fetch password-protected archives.
- Require every skill to declare external domains, binaries, shell requirements, and setup flows in a structured manifest. If a prerequisite sits outside that manifest, block or quarantine the install.
- Run semantic instruction scanning and sandboxed consistency tests on candidate skills before publish or install; markdown instructions are part of the attack surface, not harmless documentation.
- Build rapid takedown and clone-response workflows: minimum account age, community reporting, continuous re-scan, and enterprise-wide disable when a malicious skill family is identified.
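As a rough illustration of the manifest and scanning steps above, the sketch below checks a `SKILL.md` body against a declared-domain manifest and flags instructions that push the user toward out-of-band execution. The manifest shape and heuristics are assumptions, not an AIDEFEND or ClawHub API:

```python
import re
from urllib.parse import urlparse  # not strictly needed; host extracted via regex below

# Heuristics for instructions that steer the user toward out-of-band execution.
RISKY_PATTERNS = [
    re.compile(r"curl\s+[^\n]*\|\s*(?:ba)?sh", re.I),            # pipe-to-shell installs
    re.compile(r"password[- ]protected\s+(?:zip|archive)", re.I),  # evasive payload packaging
    re.compile(r"\b(?:rentry\.co|pastebin\.com)\b", re.I),         # mutable paste hosts
]

URL_RE = re.compile(r"https?://([^/\s\"')]+)", re.I)

def scan_skill_md(text: str, declared_domains: set[str]) -> list[str]:
    """Return findings for a SKILL.md body against a declared-domain manifest."""
    findings = []
    for host in {m.group(1).lower() for m in URL_RE.finditer(text)}:
        if host not in declared_domains:
            findings.append(f"undeclared domain: {host}")
    for pat in RISKY_PATTERNS:
        if pat.search(text):
            findings.append(f"risky instruction: {pat.pattern}")
    return findings
```

A real scanner would add semantic analysis on top of these string heuristics, but even this level of checking would have flagged the `google-qx4` setup instructions on both counts.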
One Additional Consideration
- Last-mile mediation for agent-requested manual installs: when an agent asks a human to run a setup command or fetch a binary, that request should pass through an explicit verification step rather than relying on the agent's helpful tone to confer legitimacy.
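One way to implement such last-mile mediation, sketched under assumed interfaces (none of these names come from the incident report): before an agent-suggested install command reaches the user, classify it against the skill's declared manifest and an allowlist of known installers:

```python
import re
import shlex

ALLOWED_BINARIES = {"pip", "npm"}  # example allowlist; tune per fleet policy

def gate_command(cmd: str, declared_domains: set[str]) -> str:
    """Classify an agent-proposed setup command: 'allow', 'confirm', or 'block'."""
    try:
        argv = shlex.split(cmd)
    except ValueError:
        return "block"  # unparseable (e.g. unbalanced quotes) is never auto-run
    if not argv:
        return "block"
    # Any URL outside the skill's declared manifest is an automatic block.
    for host_match in re.finditer(r"https?://([^/\s]+)", cmd):
        if host_match.group(1).lower() not in declared_domains:
            return "block"
    # Unknown binaries require the human to explicitly confirm.
    if argv[0] not in ALLOWED_BINARIES:
        return "confirm"
    return "allow"
```

The point is not the specific rules but the placement: the check sits between the agent's advice and the human's terminal, which is exactly the boundary this campaign exploited.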
Conclusion
This incident is a useful correction to how teams think about AI supply chain risk. The malicious artifact was largely text, not executable code, and the compromise path ran through human trust rather than agent RCE. AIDEFEND maps well here to skill admission analysis, manifest enforcement, and lifecycle governance; the remaining work is adding stronger last-mile friction between agent advice and human command execution.