Prompt injection poisons GitHub Actions cache and exfiltrates secrets in Cline incident
A prompt injection in Cline’s AI-powered GitHub issue triage poisoned shared caches and leaked release secrets, underscoring the need for CI/CD-grade LLM security controls. In the Cline case, an attacker embedded commands in an issue title to trick an AI triager running Claude Code with broad tool access, leading to a malicious npm install and cache poisoning; shared cache keys let a nightly release workflow load the tainted node_modules and leak NPM publish secrets, resulting in a compromised 2.3.0 release later retracted ([details](https://simonwillison.net/2026/Mar/6/clinejection/#atom-everything)). This chain shows how untrusted inputs to agents, write-enabled tools, and shared caches create a supply-chain blast radius. OWASP’s LLM Top 10 and Agentic Top 10 map the exact risks involved—prompt injection, sensitive info disclosure, supply chain, excessive agency, and more—and a practical 12-step guide offers code-level mitigations like input sanitization, output guarding, least privilege, and rate/consumption controls ([best practices](https://dev.to/jaipalsingh/enterprise-ai-security-12-best-practices-for-deploying-llms-in-production-525j)). Apply CI/CD hygiene too: remove write/exec tools from triage jobs, isolate caches and runners, and keep secrets out of any agent-exposed context.