Blog Published: Apr 18, 2026

ClawHub `google-qx4`: Malicious SKILL.md Prerequisites as Agent-Driven Social Engineering

Snyk describes an active ClawHub campaign where a fake Google skill (`google-qx4`, `NET_NiNjA`) embeds a bogus `openclaw-core` prerequisite in `SKILL.md`, pushing users to run attacker-controlled installers from GitHub or Rentry. The lesson is that skill security has to inspect instruction text and out-of-band setup guidance, not just code, because the agent can become the attacker's most convincing social-engineering layer.

Agentic AIAI Supply ChainMalwareRegistry Security
6 applicable AIDEFEND defenses

Threat Analysis

  • The malicious artifact is mostly instructions, not code. The ClawHub skill does not ship the final payload in the repo; it hides the attack in a fake prerequisite inside SKILL.md, where the agent reads its operating instructions.
  • The human is the execution boundary. By asking the user to install a fabricated openclaw-core dependency, the skill bypasses the agent's own sandboxing. The user pastes the command or runs the binary; the compromise happens outside the agent runtime.
  • The delivery chain is built for evasion. Snyk found a password-protected ZIP on GitHub for Windows victims and a mutable Rentry paste for macOS/Linux that decoded into a download-and-execute stager from an attacker-controlled domain.
  • This is agent-driven social engineering. The skill uses the agent's authority and helpful tone to make a malicious setup step feel routine, shifting the attack from tricking the model to tricking the human through the model.
  • Registry response still has to outrun clones. ClawHub's account-age and report-threshold controls help, yet Snyk says lookalike skills can reappear within hours, so one-time takedowns are not enough.

Applicable AIDEFEND Defenses (6)

AID-H-031.001
Skill Metadata & Manifest Honesty Validation
Very High
This case used a legitimate-looking Google integration story and a fabricated openclaw-core prerequisite. Metadata honesty validation is what should flag brand impersonation, suspicious prerequisite claims, and other metadata designed to manipulate human trust.
AID-H-031.002
Instruction-Layer Semantic Security Analysis
Very High
The highest-signal malicious content lived in prose, not code. Admission scanners need to read SKILL.md semantically and catch directions that tell the agent or the user to fetch external binaries, paste shell commands, or follow setup steps unrelated to the declared function.
AID-M-001.003
Agentic Skill Asset Inventory & Lifecycle Governance
High
Treat every installed ClawHub skill as its own governed asset with approval state, owner, version, and disable path. That gives defenders a way to find google-qx4-style clones quickly and revoke them across developer fleets when a campaign is identified.
AID-H-019.007
Skill-Level Permission Manifest Validation & Runtime Enforcement
High
Skills should have to declare external binaries, download domains, shell requirements, and manual setup steps up front. A Gmail integration that suddenly requires an undeclared third-party installer or terminal command should fail admission or be quarantined for review.
AID-H-031.005
Admission Policy Orchestration & Continuous Re-Scan Governance
High
Snyk's reporting shows that flagged skills can return as clones within hours. Registry operators and enterprise mirrors need continuous re-scan, policy-driven auto-hide, and incident-triggered disable workflows instead of treating review as a one-time gate.
AID-H-031.003
Manifest-vs-Observed Behavioral Consistency Testing
Medium
Candidate skills should be exercised in an isolated test flow before approval. If a purported Google productivity skill immediately pivots into manual binary installation, off-platform downloads, or shell guidance, the observed behavior does not match the declared purpose and should fail admission.

What Defenders Should Do Now

  • Search your fleets and developer notes for google-qx4, NET_NiNjA, or any ClawHub Google skill that required manual openclaw-core installation. If it was used, isolate the machine and inspect for persistence.
  • Audit installed skills and registry submissions for SKILL.md or README instructions that tell users to download external binaries, run terminal commands, or fetch password-protected archives.
  • Require every skill to declare external domains, binaries, shell requirements, and setup flows in a structured manifest. If a prerequisite sits outside that manifest, block or quarantine the install.
  • Run semantic instruction scanning and sandboxed consistency tests on candidate skills before publish or install; markdown instructions are part of the attack surface, not harmless documentation.
  • Build rapid takedown and clone-response workflows: minimum account age, community reporting, continuous re-scan, and enterprise-wide disable when a malicious skill family is identified.

1 additional consideration

Last-mile mediation for agent-requested manual installs

Beyond the techniques mapped above, teams running local agent ecosystems should also add a dedicated UX safety layer for any skill that asks a human to paste a shell command, download an external archive, or install a binary outside the normal package flow. This case worked because the request looked like a routine setup step coming from a trusted agent.
Recommendation: Quarantine any skill-generated prerequisite that requires manual terminal execution. Show publisher, domain, and artifact provenance inline, require explicit step-up confirmation, and default-deny copy-pasteable commands from unapproved destinations.

Conclusion

This incident is a useful correction to how teams think about AI supply chain risk. The malicious artifact was largely text, not executable code, and the compromise path ran through human trust more than agent RCE. AIDEFEND  maps well here to skill admission analysis, manifest enforcement, and lifecycle governance; the extra work left is putting stronger last-mile friction between agent advice and human command execution.