
Zenity

Company

Zenity is a free, cross-platform program for displaying GTK dialog boxes from the command line and from shell scripts.
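For example, a script can gate a destructive step behind a graphical confirmation dialog. A minimal sketch, using real zenity options (`--question` exits 0 on "Yes" and 1 on "No") and falling back to a plain terminal prompt when zenity or a display is unavailable:

```shell
#!/bin/sh
# Ask the user to confirm an action, preferring a GTK dialog.
confirm() {
    if command -v zenity >/dev/null 2>&1 && [ -n "${DISPLAY:-}" ]; then
        # zenity --question returns 0 for "Yes", 1 for "No"
        zenity --question --title="Confirm" --text="$1"
    else
        # Headless fallback: plain terminal prompt
        printf '%s [y/N] ' "$1"
        read -r reply
        case "$reply" in [Yy]*) return 0 ;; *) return 1 ;; esac
    fi
}

if confirm "Delete old backups?"; then
    result="confirmed"
else
    result="declined"
fi
echo "$result"
```

The same pattern works with `--info`, `--entry`, `--file-selection`, and the other dialog types zenity provides.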

2 stories · First seen: 2026-02-10 · Last seen: 2026-02-20 · Website · Wikipedia

Resources

Links to check for updates: homepage, feed, or git repo.

Homepage

Stories


AI agents under attack: prompt injection exploits and new defenses

Enterprises deploying AI assistants and desktop agents face real prompt-injection and safety failures in tools like Copilot, ChatGPT, Grok, and OpenClaw, while new detection methods that inspect LLM internals are emerging to harden defenses. Security researchers show popular assistants can be steered into malware generation, phishing, and data exfiltration via prompt injection and social engineering, with heightened risk when models tap external data sources, as covered in [WebProNews](https://www.webpronews.com/when-your-ai-assistant-turns-against-you-how-hackers-are-weaponizing-copilot-grok-and-chatgpt-to-spread-malware/). Companies are also restricting high-privilege agents like [OpenClaw](https://arstechnica.com/ai/2026/02/openclaw-security-fears-lead-meta-other-ai-firms-to-restrict-its-use/), citing unpredictability and privacy risk, even as OpenAI commits to keep it open source. The fragility extends to retrieval and web-grounded answers: a reporter manipulated [ChatGPT and Google’s AI](https://www.bbc.com/future/article/20260218-i-hacked-chatgpt-and-googles-ai-and-it-only-took-20-minutes?_bhlid=fca599b94127e0d5009ae7449daf996994809fc2) with a single blog post, underscoring how easily answers can be influenced at scale. AppSec leaders are already reframing strategy for AI-era vulnerabilities, as flagged by [The New Stack](https://thenewstack.io/ai-agents-appsec-strategy/). Beyond input/output filters, Zenity proposes a maliciousness classifier that reads the model’s internal activations to flag manipulative prompts, releasing a paper, infrastructure, and cross-domain benchmarks to foster “agentic security” practices, detailed by [Zenity Labs](https://labs.zenity.io/p/looking-inside-a-maliciousness-classifier-based-on-the-llm-s-internals).

2026-02-20
microsoft-copilot grok chatgpt openclaw openai
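The core idea behind an activation-based classifier like the one Zenity Labs describes is to train a lightweight probe on a model's hidden states rather than on the raw prompt text. A hedged, self-contained sketch (not Zenity's actual code): here the "activations" are synthetic vectors standing in for hidden states captured during an LLM forward pass, with the assumption that malicious prompts shift the mean of a few coordinates.

```python
import math
import random

random.seed(0)
DIM = 16  # stand-in hidden-state width

def fake_activation(malicious: bool) -> list[float]:
    # Synthetic substitute for a real hidden-state vector;
    # malicious prompts are assumed to shift a few coordinates.
    vec = [random.gauss(0, 1) for _ in range(DIM)]
    if malicious:
        for i in range(4):
            vec[i] += 2.0
    return vec

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Labeled training set: (activation vector, 1.0 = malicious)
data = [(fake_activation(m), 1.0 if m else 0.0)
        for m in [True, False] * 200]

# Train a logistic-regression probe with plain gradient descent.
w, b, lr = [0.0] * DIM, 0.0, 0.1
for _ in range(200):
    gw, gb = [0.0] * DIM, 0.0
    for x, y in data:
        err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
        for i in range(DIM):
            gw[i] += err * x[i]
        gb += err
    n = len(data)
    w = [wi - lr * gi / n for wi, gi in zip(w, gw)]
    b -= lr * gb / n

def flag(x: list[float]) -> bool:
    """Flag an activation vector as malicious."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5

acc = sum(flag(x) == (y == 1.0) for x, y in data) / len(data)
```

In a real deployment the probe would consume activations hooked out of a specific transformer layer; the appeal of the approach is that it inspects what the model is internally doing, not just the text crossing the I/O boundary.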

Cisco open-sources CodeGuard as research flags predictable LLM code flaws

Cisco donated its CodeGuard security framework to OASIS’s Coalition for Secure AI as new research shows LLM code assistants repeat predictable vulnerabilities, raising the bar for secure-by-default AI coding workflows. Details of the donation and its integration targets (Cursor, Copilot, Windsurf, Claude Code) are in OASIS Open’s announcement [Cisco Donates Project CodeGuard to Coalition for Secure AI](https://www.oasis-open.org/2026/02/09/cisco-donates-project-codeguard-to-coalition-for-secure-ai/)[^1]. Complementary research shows vulnerability persistence and a black-box FSTab method with up to 94% attack success on LLM-generated apps [AI Code Generation Tools Repeat Security Flaws, Creating Predictable Software Weaknesses](https://quantumzeitgeist.com/ai-security-code-generation-tools-repeat-flaws/)[^2], with broader context on latent backdoors in “clean” AI code [Backdoors With Manners](https://hackernoon.com/backdoors-with-manners-when-ai-writes-clean-code-that-turns-malicious-later?source=rss)[^3] and sector-specific safety layers emerging in healthcare [Inside Guardrails AI](https://www.webpronews.com/inside-guardrails-ai-how-a-seattle-startup-is-deploying-clinical-expertise-to-neutralize-the-most-dangerous-failures-in-artificial-intelligence/)[^4].

[^1]: Official details on CodeGuard scope, integrations, and governance via CoSAI.
[^2]: Research summary explaining FSTab, vulnerability recurrence metrics, and attack success rates.
[^3]: Perspective on behavioral trojans and delayed-malicious code patterns.
[^4]: Example of domain-specific safety guardrails in production contexts.

2026-02-09
cisco oasis-open coalition-for-secure-ai-cosai project-codeguard cursor
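One reason these flaws are "predictable" is that assistants tend to reproduce a handful of well-known insecure idioms. A hedged illustration (not drawn from the cited research) of the classic case, SQL built by string interpolation, and the parameterized form that rules frameworks like CodeGuard aim to enforce by default:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")
conn.execute("INSERT INTO users VALUES ('bob', 'hunter2')")

def lookup_unsafe(name: str):
    # Recurring generated pattern: user input interpolated into SQL.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # Parameterized query: the input never reaches the SQL parser.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

# A classic injection payload leaks every row through the unsafe path
# but matches nothing through the parameterized one.
payload = "x' OR '1'='1"
leaked = lookup_unsafe(payload)  # every secret in the table
safe = lookup_safe(payload)      # empty result
```

Because the insecure idiom is syntactically regular, it is exactly the kind of pattern that both the research's recurrence metrics and rule-based guardrails can target.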