
Google Threat Intelligence Group (GTIG)


ShinyHunters is a notorious black-hat criminal hacking and extortion group, believed to have formed in 2019 and linked to a large number of significant data breaches. The group typically extorts the companies it hacks; if a company does not pay the ransom, the stolen data is sold or leaked on the dark web.

2 stories · First seen: 2026-02-14 · Last seen: 2026-02-17 · Wikipedia

Stories


Securing non‑human access: GTIG threat trends, JIT AuthZ, and ChatGPT Lockdown Mode

Attackers are leveraging AI and non-human identities at scale, pushing teams to adopt zero-trust patterns such as just-in-time authorization and tool constraints to curb data exfiltration and misuse. Google’s Threat Intelligence Group reports rising model-extraction (distillation) attempts and broader AI-augmented phishing and reconnaissance across multiple state actors, though no breakthrough attacker capability has yet emerged; see their latest findings for concrete patterns defenders should anticipate and disrupt ([GTIG AI Threat Tracker](https://cloud.google.com/blog/topics/threat-intelligence/distillation-experimentation-integration-ai-adversarial-use)). A complementary zero-trust lens for agentic systems is outlined in this short talk on hardening agent permissions and egress ([Securing AI Agents with Zero Trust](https://www.youtube.com/watch?v=d8d9EZHU7fw)).

For API backends, tightening non-human access is urgent: adopt just-in-time OAuth patterns to eliminate “ghost” and “zombie” identities and shorten token lifetimes, as detailed in this practical guide to adapting OAuth for agents and services ([Just-in-Time Authorization](https://nordicapis.com/just-in-time-authorization-securing-the-non-human-internet/)). On the tooling side, OpenAI introduced ChatGPT Lockdown Mode to deterministically restrict risky integrations (e.g., browsing limited to cached content) and added “Elevated Risk” labels for sensitive capabilities ([Lockdown Mode and Elevated Risk](https://links.tldrnewsletter.com/sJL9w6)), while the open-source [llm-authz-audit](https://github.com/aiauthz/llm-authz-audit) scanner helps catch missing rate limits, leaked credentials, and prompt-injection surfaces in CI before deployment.
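The just-in-time idea is simple enough to sketch in a few lines: mint a narrowly scoped credential per request with a short TTL, so no long-lived secret exists for an attacker to steal. This is an illustrative in-memory issuer, not the API from the linked guide; the class, scope strings, and TTL are assumptions for the example.

```python
import secrets
import time

# Hypothetical in-memory issuer illustrating just-in-time authorization:
# tokens are minted on demand, bound to a single scope, and expire quickly,
# so there are no long-lived "ghost" or "zombie" credentials to abuse.
class JITTokenIssuer:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._live = {}  # token -> (scope, expiry timestamp)

    def mint(self, agent_id, scope):
        """Issue a fresh, short-lived token scoped to one capability."""
        token = secrets.token_urlsafe(32)
        self._live[token] = (scope, time.time() + self.ttl)
        return token

    def authorize(self, token, requested_scope):
        """Allow only an unexpired token used for its exact scope."""
        entry = self._live.get(token)
        if entry is None:
            return False
        scope, expiry = entry
        if time.time() > expiry:
            del self._live[token]  # expired token is cleaned up, not reusable
            return False
        return requested_scope == scope

issuer = JITTokenIssuer(ttl_seconds=300)
token = issuer.mint("report-agent", scope="read:invoices")
print(issuer.authorize(token, "read:invoices"))   # True: in scope, in TTL
print(issuer.authorize(token, "write:invoices"))  # False: scope too broad
```

In a real deployment the issuer would be the authorization server and the token an OAuth access token, but the properties that matter for non-human identities are the same: short lifetime, minimal scope, and nothing standing to revoke later.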

2026-02-17
openai chatgpt chatgpt-enterprise chatgpt-edu chatgpt-for-healthcare

Gemini Deep Think: research gains, CLI workflows, and model-extraction risks

Google’s Gemini Deep Think is graduating from contests to real research and developer workflows, but its growing capability is also attracting copycat extraction and criminal abuse that teams must plan around. Google DeepMind details how Gemini Deep Think, guided by experts, is tackling professional math and science problems using an agent (Aletheia) that iteratively generates, verifies, revises, and even browses to avoid spurious citations, with results improving as inference-time compute scales and outperforming prior Olympiad-level benchmarks ([Google DeepMind](https://deepmind.google/blog/accelerating-mathematical-and-scientific-discovery-with-gemini-deep-think/)). A broader industry pulse notes the release’s framing and early user anecdotes around “Gemini 3 Deep Think” appearing in the wild ([Simon Willison’s Weblog](https://simonwillison.net/2026/Feb/12/gemini-3-deep-think/#atom-everything)). For context on user expectations, this differs from Google Search’s ranking-first paradigm: Gemini aims for single-response reasoning rather than surfacing diverse sources ([DataStudios](https://www.datastudios.org/post/why-does-gemini-give-different-answers-than-google-search-reasoning-versus-ranking-logic)).

For day-to-day engineering, a terminal-native Gemini CLI is emerging to integrate AI directly into developer workflows, writing files, chaining commands, and automating tasks without browser context switching, which can accelerate prototyping, code generation, and research summarization in place ([Gemini CLI guide](https://atalupadhyay.wordpress.com/2026/02/12/gemini-cli-from-first-steps-to-advanced-workflows/)).

Security posture must catch up: Google reports adversaries tried to clone Gemini via high-volume prompting (more than 100,000 prompts in one session) to distill its behavior, and separate threat intel highlights rising criminal use of Gemini for phishing, malware assistance, and reconnaissance, underscoring the need for rate limits, monitoring, and policy controls around model access and outputs ([Ars Technica](https://arstechnica.com/ai/2026/02/attackers-prompted-gemini-over-100000-times-while-trying-to-clone-it-google-says/), [WebProNews](https://www.webpronews.com/from-experimentation-to-exploitation-how-cybercriminals-are-weaponizing-googles-own-ai-tools-against-the-digital-world/)).
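The rate-limiting control the reporting points to can be sketched as a per-session sliding-window counter that flags extraction-style prompt volume. The class name, window, and threshold below are illustrative assumptions for the example, not Google’s actual controls.

```python
import time
from collections import deque

# Illustrative sliding-window monitor for detecting extraction-style
# prompt volume per session. Thresholds here are made up for the sketch;
# real systems would tune them and combine volume with other signals.
class PromptVolumeMonitor:
    def __init__(self, window_seconds=3600, threshold=1000):
        self.window = window_seconds
        self.threshold = threshold
        self._events = {}  # session_id -> deque of event timestamps

    def record(self, session_id, now=None):
        """Log one prompt; return True if the session should be flagged."""
        now = time.time() if now is None else now
        q = self._events.setdefault(session_id, deque())
        q.append(now)
        # Evict timestamps that have fallen out of the sliding window.
        while q and q[0] < now - self.window:
            q.popleft()
        return len(q) > self.threshold  # True => throttle / alert

monitor = PromptVolumeMonitor(window_seconds=3600, threshold=1000)
flagged = False
for i in range(1500):  # simulate a burst of prompts one second apart
    flagged = monitor.record("sess-1", now=1000.0 + i)
print(flagged)  # True: 1,500 prompts in one hour exceeds the threshold
```

A flag like this would feed throttling or step-up review rather than a hard block, since legitimate automation can also be bursty; the point is that distillation attempts at the scale reported (over 100,000 prompts in a session) are easy to surface once per-session volume is tracked at all.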

2026-02-12
google-deepmind google gemini-deep-think gemini-cli google-search