Cisco open-sources CodeGuard as research flags predictable LLM code flaws
Cisco donated its CodeGuard security framework to OASIS Open’s Coalition for Secure AI (CoSAI) as new research shows that LLM code assistants repeat predictable vulnerabilities, raising the bar for secure-by-default AI coding workflows. Details of the open donation and its integration targets (Cursor, Copilot, Windsurf, Claude Code) are in OASIS Open’s announcement, [Cisco Donates Project CodeGuard to Coalition for Secure AI](https://www.oasis-open.org/2026/02/09/cisco-donates-project-codeguard-to-coalition-for-secure-ai/)[^1]. Complementary research documents vulnerability persistence and demonstrates a black-box FSTab method achieving up to 94% attack success against LLM-generated apps ([AI Code Generation Tools Repeat Security Flaws, Creating Predictable Software Weaknesses](https://quantumzeitgeist.com/ai-security-code-generation-tools-repeat-flaws/)[^2]), with broader context on latent backdoors in “clean” AI code ([Backdoors With Manners](https://hackernoon.com/backdoors-with-manners-when-ai-writes-clean-code-that-turns-malicious-later?source=rss)[^3]) and sector-specific safety layers emerging in healthcare ([Inside Guardrails AI](https://www.webpronews.com/inside-guardrails-ai-how-a-seattle-startup-is-deploying-clinical-expertise-to-neutralize-the-most-dangerous-failures-in-artificial-intelligence/)[^4]).

[^1]: Adds: Official details on CodeGuard scope, integrations, and governance via CoSAI.
[^2]: Adds: Research summary explaining FSTab, vulnerability recurrence metrics, and attack success rates.
[^3]: Adds: Perspective on behavioral trojans and delayed-malicious code patterns.
[^4]: Adds: Example of domain-specific safety guardrails in production contexts.
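If flaws in LLM-generated code really do recur predictably, they can be gated mechanically before they land in a repo. Below is a minimal Python sketch of such a secure-by-default gate, using one classic recurring class, SQL assembled by string interpolation (CWE-89), as a stand-in for the flaw families the research describes. Everything here (function names, heuristics, CLI shape) is an illustrative assumption, not CodeGuard’s actual rule format or enforcement mechanism.

```python
import ast
import sys

# Illustrative gate for one recurring flaw class: SQL assembled with
# f-strings, %-interpolation, or + concatenation (CWE-89). The heuristics
# below are assumptions for this sketch, not CodeGuard's rule schema.

SQL_KEYWORDS = ("select ", "insert ", "update ", "delete ", "drop ")


def looks_like_sql(text: str) -> bool:
    """Cheap check: does a string literal start like a SQL statement?"""
    return text.lower().lstrip().startswith(SQL_KEYWORDS)


def scan(source: str, filename: str = "<generated>") -> list[str]:
    """Flag SQL string literals that are interpolated or concatenated."""
    findings = []
    for node in ast.walk(ast.parse(source, filename=filename)):
        # f"SELECT ... {user_input}" -- dynamic values spliced into SQL text
        if isinstance(node, ast.JoinedStr) and node.values:
            head = node.values[0]
            if isinstance(head, ast.Constant) and looks_like_sql(str(head.value)):
                findings.append(
                    f"{filename}:{node.lineno}: SQL built with an f-string (possible CWE-89)"
                )
        # "SELECT ... %s" % user_input  or  "SELECT ..." + user_input
        elif isinstance(node, ast.BinOp) and isinstance(node.op, (ast.Mod, ast.Add)):
            left = node.left
            if (
                isinstance(left, ast.Constant)
                and isinstance(left.value, str)
                and looks_like_sql(left.value)
            ):
                findings.append(
                    f"{filename}:{node.lineno}: SQL built by interpolation/concat (possible CWE-89)"
                )
    return findings


if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as fh:
            for finding in scan(fh.read(), path):
                print(finding)
```

Wiring a check like this into a pre-commit hook or CI step is one way to operationalize the secure-by-default posture; the donated CodeGuard rules, per the announcement, instead target the coding assistants themselves (Cursor, Copilot, Windsurf, Claude Code) so that flawed patterns are steered away from at generation time.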