howtonotcode.com

ChatGPT

AI Tool

A conversational AI model that generates human-like text responses.

28 stories · First seen: 2025-12-30 · Last seen: 2026-03-03 · Website · Wikipedia

Resources

Links to check for updates: homepage, feed, or git repo.

Homepage

Stories

Showing 1-20 of 28

AI agents under attack: prompt injection exploits and new defenses

Enterprises deploying AI assistants and desktop agents face real prompt-injection and safety failures in tools like Copilot, ChatGPT, Grok, and OpenClaw, while new detection methods that inspect LLM internals are emerging to harden defenses. Security researchers show popular assistants can be steered into malware generation, phishing, and data exfiltration via prompt injection and social engineering, with heightened risk when models tap external data sources, as covered in [WebProNews](https://www.webpronews.com/when-your-ai-assistant-turns-against-you-how-hackers-are-weaponizing-copilot-grok-and-chatgpt-to-spread-malware/). Companies are also restricting high-privilege agents like [OpenClaw](https://arstechnica.com/ai/2026/02/openclaw-security-fears-lead-meta-other-ai-firms-to-restrict-its-use/), citing unpredictability and privacy risk, even as OpenAI commits to keep it open source. The fragility extends to retrieval and web-grounded answers: a reporter manipulated [ChatGPT and Google’s AI](https://www.bbc.com/future/article/20260218-i-hacked-chatgpt-and-googles-ai-and-it-only-took-20-minutes?_bhlid=fca599b94127e0d5009ae7449daf996994809fc2) with a single blog post, underscoring the ease of large-scale influence. AppSec leaders are already reframing strategy for AI-era vulns, as flagged by [The New Stack](https://thenewstack.io/ai-agents-appsec-strategy/). Beyond I/O filters, Zenity proposes a maliciousness classifier that reads the model’s internal activations to flag manipulative prompts, releasing paper, infra, and cross-domain benchmarks to foster “agentic security” practices, detailed by [Zenity Labs](https://labs.zenity.io/p/looking-inside-a-maliciousness-classifier-based-on-the-llm-s-internals).
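The Zenity approach above classifies prompts from the model's internal activations rather than its input/output text. As a toy illustration only (synthetic activations, a simple logistic probe; none of this is Zenity's actual architecture), the idea can be sketched like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for pooled hidden-state activations (e.g., a mean over tokens
# at one layer). Real features would come from a hook on the model's forward pass.
DIM = 16
benign = rng.normal(0.0, 1.0, size=(200, DIM))
malicious = rng.normal(0.8, 1.0, size=(200, DIM))  # shifted distribution

X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)

# Minimal logistic-regression probe trained with gradient descent.
w, b = np.zeros(DIM), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * float(np.mean(p - y))

def flag(activation: np.ndarray, threshold: float = 0.5) -> bool:
    """Return True if the probe scores the prompt's activations as manipulative."""
    return 1.0 / (1.0 + np.exp(-(activation @ w + b))) > threshold

accuracy = float(np.mean((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y))
```

The practical appeal is that such a probe sits beside the model, so it can fire even when an injection is paraphrased to evade surface-level I/O filters.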

2026-02-20
microsoft-copilot grok chatgpt openclaw openai

Ship an AI RFP-scoring pipeline with n8n + Gemini, and mind the file limits (vs ChatGPT)

You can automate RFP scoring and spreadsheet analysis with Gemini today using n8n, while planning around concrete file-format and size limits across Gemini and ChatGPT. An end-to-end n8n workflow shows how to accept vendor PDFs via a form webhook, fetch the RFP from Drive, extract text, merge both streams, call the Gemini API with a structured prompt to return JSON scores, and append results to Sheets—plus Drive auth scopes and download details like alt=media are covered in this guide ([n8n + Gemini RFP evaluation](https://dev.to/hackceleration/building-ai-powered-rfp-evaluation-with-n8n-and-google-gemini-pf5)). For data handling at scale, Gemini supports XLS/XLSX/CSV/TSV and Google Sheets; Gemini chat allows up to 10 files per prompt at 100 MB each, while the Files API permits up to 2 GB per file and 20 GB per project for 48 hours—useful for batch or programmatic flows ([Gemini spreadsheet upload and limits](https://www.datastudios.org/post/google-gemini-spreadsheet-uploading-excel-and-csv-support-data-analysis-capabilities-formula-hand)). If you compare providers, ChatGPT accepts many document and data types but caps file size at 512 MB (with spreadsheet practical limits around ~50 MB) and also enforces token and image-specific ceilings, which can influence provider selection for large artifacts ([ChatGPT file upload limits](https://www.datastudios.org/post/chatgpt-file-uploading-capabilities-supported-file-types-upload-size-limits-rules-and-document-r)).
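The Gemini step of that workflow (structured prompt in, JSON scores out, validated before the Sheets append) can be sketched outside n8n. This is a hedged illustration: the REST endpoint path, model id, and score schema below are assumptions to adapt to current Gemini docs, not values from the guide.

```python
import json
import urllib.request

# Assumed Gemini REST endpoint and model id; verify against current API docs.
API_URL = ("https://generativelanguage.googleapis.com/v1beta/models/"
           "gemini-2.0-flash:generateContent?key={key}")

def build_prompt(rfp_text: str, vendor_text: str) -> str:
    # Structured prompt mirroring the workflow's "return JSON scores" step.
    return (
        "Score the vendor response against the RFP on a 1-10 scale per "
        "criterion. Respond with JSON only, shaped like "
        '{"scores": {"technical": n, "cost": n, "timeline": n}, "summary": "..."}\n\n'
        f"RFP:\n{rfp_text}\n\nVendor response:\n{vendor_text}"
    )

def parse_scores(model_reply: str) -> dict:
    """Validate the model's JSON before appending it to Sheets."""
    data = json.loads(model_reply)
    if not all(1 <= v <= 10 for v in data["scores"].values()):
        raise ValueError(f"score out of range: {data['scores']}")
    return data

def score_rfp(api_key: str, rfp_text: str, vendor_text: str) -> dict:
    body = json.dumps({
        "contents": [{"parts": [{"text": build_prompt(rfp_text, vendor_text)}]}],
        "generationConfig": {"responseMimeType": "application/json"},
    }).encode()
    req = urllib.request.Request(API_URL.format(key=api_key), data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return parse_scores(reply["candidates"][0]["content"]["parts"][0]["text"])
```

Validating the score range before writing to Sheets is what keeps a hallucinated or malformed model reply from silently corrupting the results sheet.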

2026-02-17
google-gemini n8n google-drive google-sheets google-files-api

Securing non‑human access: GTIG threat trends, JIT AuthZ, and ChatGPT Lockdown Mode

Attackers are leveraging AI and non-human identities at scale, pushing teams to adopt zero-trust patterns like just-in-time authorization and tool constraints to curb data exfiltration and misuse. Google’s Threat Intelligence Group reports rising model extraction (distillation) attempts and broader AI-augmented phishing and recon across multiple state actors, though no breakthrough attacker capability has yet emerged; see their latest findings for concrete patterns defenders should anticipate and disrupt ([GTIG AI Threat Tracker](https://cloud.google.com/blog/topics/threat-intelligence/distillation-experimentation-integration-ai-adversarial-use?_bhlid=e8c3bb888ecba50d9cd632ef6e7caa0d1a96f294)). A complementary zero-trust lens for agentic systems is outlined in this short talk on hardening agent permissions and egress ([Securing AI Agents with Zero Trust](https://www.youtube.com/watch?v=d8d9EZHU7fw&_bhlid=2d86e48f55bcb7e2838f5fae2b06083739cea245)). For API backends, tightening non-human access is urgent: adopt just-in-time OAuth patterns to eliminate “ghost” and “zombie” identities and shorten token lifetimes, as detailed in this practical guide to adapting OAuth for agents and services ([Just-in-Time Authorization](https://nordicapis.com/just-in-time-authorization-securing-the-non-human-internet/)). On the tooling side, OpenAI introduced ChatGPT Lockdown Mode to deterministically restrict risky integrations (e.g., browsing limited to cached content) and added “Elevated Risk” labels for sensitive capabilities ([Lockdown Mode and Elevated Risk](https://links.tldrnewsletter.com/sJL9w6)), while the open-source [llm-authz-audit](https://github.com/aiauthz/llm-authz-audit?_bhlid=a9fa546b051a3f05f59975ca296c7abd0f224afe) scanner helps catch missing rate limits, leaked creds, and prompt-injection surfaces in CI before deployment.
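The just-in-time pattern for non-human identities can be reduced to a small sketch: mint a narrowly scoped token only when an agent needs it, with a TTL short enough that forgotten credentials expire on their own. This is an in-process toy, not the guide's OAuth flow; a real system would do token exchange against an authorization server.

```python
import time
import secrets

TOKEN_TTL_SECONDS = 60  # short lifetime so unused credentials expire quickly
_tokens: dict[str, tuple[str, str, float]] = {}  # token -> (agent, scope, expiry)

def issue_token(agent_id: str, scope: str) -> str:
    """Mint a narrowly scoped token on demand (just-in-time)."""
    token = secrets.token_urlsafe(24)
    _tokens[token] = (agent_id, scope, time.monotonic() + TOKEN_TTL_SECONDS)
    return token

def authorize(token: str, required_scope: str) -> bool:
    """Accept only live, exactly scoped tokens; purge expired ones."""
    entry = _tokens.get(token)
    if entry is None:
        return False
    _agent, scope, expiry = entry
    if time.monotonic() > expiry:
        del _tokens[token]  # drop the zombie credential
        return False
    return scope == required_scope
```

The exact-scope check is the zero-trust part: an agent holding `read:invoices` cannot quietly reuse the same token for a write path, which is precisely the egress-tightening the talk above argues for.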

2026-02-17
openai chatgpt chatgpt-enterprise chatgpt-edu chatgpt-for-healthcare

Proof-of-training for XGBoost meets rising AI data opt-outs

Zero-knowledge proofs for XGBoost training are becoming practical just as consumer AI data opt-outs surge, pushing teams to verify models without exposing data and to enforce consent-aware pipelines. [ZKBoost delivers a zero-knowledge proof-of-training for XGBoost via a fixed-point implementation and CertXGB, achieving ~1% accuracy delta and practical verification on real datasets](https://quantumzeitgeist.com/ai-machine-learning-privacy-preserving-system-verifies-without/)[^1]. [Meanwhile, reports detail mounting 'AI opt-out' friction at Google and Meta that complicates consent and governance for training pipelines](https://www.webpronews.com/the-great-ai-opt-out-why-millions-are-racing-to-pull-their-data-from-google-meta-and-the-machine-learning-pipeline/)[^2]. [^1]: Explains zkPoT for XGBoost, fixed-point arithmetic, CertXGB, VOLE instantiation, and ~1% accuracy gap on real data. [^2]: Describes user opt-out trends, buried settings, GDPR vs. U.S. gaps, and implications for training data consent.
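Why fixed-point arithmetic matters here: ZK proof systems work over field elements, so a scheme like ZKBoost must replace floating-point tree thresholds with integers. The sketch below is only a conceptual toy (a single decision stump, an arbitrary 16-bit scale that is my assumption, not ZKBoost's parameters) showing that the quantized integer comparison reproduces the float decision path.

```python
SCALE = 1 << 16  # 16 fractional bits: an illustrative choice, not ZKBoost's

def to_fixed(x: float) -> int:
    """Quantize a float to fixed-point at the chosen scale."""
    return round(x * SCALE)

def eval_stump_float(x: float, threshold: float, left: float, right: float) -> float:
    return left if x < threshold else right

def eval_stump_fixed(xq: int, tq: int, lq: int, rq: int) -> int:
    # Integer-only comparison: the form a ZK circuit can express directly.
    return lq if xq < tq else rq

x, threshold, left, right = 0.37, 0.5, -1.25, 2.5
float_out = eval_stump_float(x, threshold, left, right)
fixed_out = eval_stump_fixed(to_fixed(x), to_fixed(threshold),
                             to_fixed(left), to_fixed(right))
```

The small accuracy delta reported for CertXGB is essentially the cumulative effect of this rounding across every split and leaf in the ensemble.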

2026-02-10
xgboost zkboost certxgb google meta

OpenAI’s next wave: GPT-5, AI-built models, and a $40B push

OpenAI is pairing renewed ChatGPT growth with an imminent model upgrade and AI-assisted model development, signaling a faster cadence toward GPT-5 and higher enterprise reliability. Altman flagged >10% monthly ChatGPT growth, a $40B round, ads, and an imminent model update to counter Anthropic’s coding gains in an internal push for momentum ([OpenAI’s Growth Gambit](https://www.webpronews.com/openais-growth-gambit-inside-sam-altmans-push-to-reclaim-momentum-as-chatgpt-hits-a-pivotal-inflection-point/))[^1]. WebProNews outlines GPT-5’s expected leap in reasoning, multimodality, and stability for enterprises, alongside OpenAI’s disclosure that its newest frontier model was substantially built using its own AI systems ([GPT-5 and the Great AI Arms Race](https://www.webpronews.com/openais-gpt-5-and-the-great-ai-arms-race-why-the-next-generation-of-language-models-could-reshape-enterprise-computing/))[^2] and ([The Ouroboros Moment](https://www.webpronews.com/the-ouroboros-moment-openai-says-its-newest-ai-was-built-by-ai-itself-and-the-industry-is-taking-notice/))[^3]. [^1]: Adds: internal growth, funding scale/valuation, ads, and “imminent model update” context vs Anthropic. [^2]: Adds: what GPT-5 aims to improve (reasoning, context, multimodal) and enterprise implications. [^3]: Adds: AI-built-AI development details and safety/oversight considerations.

2026-02-09
openai chatgpt gpt-5 anthropic softbank

UK/NY AI rules meet adversarial safety: what backend/data teams must change

AI governance is shifting from voluntary guidelines to binding obligations while labs formalize adversarial and constitutional safety methods, raising new requirements for evaluation, logging, and incident reporting. The UK is proposing mandatory registration, pre‑release safety testing, and incident reporting for frontier models enforced via the AI Safety Institute, moving beyond voluntary pledges [Inside the Scramble to Tame AI: Why the UK’s New Regulatory Push Could Reshape the Global Tech Order](https://www.webpronews.com/inside-the-scramble-to-tame-ai-why-the-uks-new-regulatory-push-could-reshape-the-global-tech-order/)[^1]. New York is advancing transparency and impact‑assessment bills for high‑risk AI decisions [Albany’s AI Reckoning: Inside New York’s Ambitious Bid to Become America’s Toughest Regulator of Artificial Intelligence](https://www.webpronews.com/albanys-ai-reckoning-inside-new-yorks-ambitious-bid-to-become-americas-toughest-regulator-of-artificial-intelligence/)[^2], while labs push adversarial reasoning and constitutional alignment to harden model behavior [Inside Adversarial Reasoning: How AI Labs Are Teaching Models to Think by Fighting Themselves](https://www.webpronews.com/inside-adversarial-reasoning-how-ai-labs-are-teaching-models-to-think-by-fighting-themselves/)[^3] [Thoughts on Claude's Constitution](https://windowsontheory.org/2026/01/27/thoughts-on-claudes-constitution/)[^4]. [^1]: Summarizes the UK proposal: mandatory registration, pre‑release safety testing, and incident reporting via the AI Safety Institute. [^2]: Details New York’s bills: transparency requirements, impact assessments, and penalties. [^3]: Explains adversarial debate/self‑play and automated red‑teaming as next‑gen training/eval methods. [^4]: An OpenAI researcher’s critique of Anthropic’s Claude Constitution and implications for alignment practice.

2026-02-09
openai anthropic google-deepmind meta uk-ai-safety-institute

Opus 4.6 Agent Teams vs GPT-5.3 Codex: multi‑agent coding arrives for real SDLC work

Anthropic's Claude Opus 4.6 brings multi-agent "Agent Teams" and a 1M-token context while OpenAI's GPT-5.3-Codex counters with faster, stronger agentic coding, together signaling a step change in AI-assisted development. Opus 4.6 adds team-based parallelization in Claude Code, long‑context retrieval gains, adaptive reasoning/effort controls, and Office sidebars, with pricing unchanged [Data Points](https://www.deeplearning.ai/the-batch/claude-opus-4-6-pushes-the-envelope/)[^1] and launch coverage framing initial benchmark leads at release [AI Collective](https://aicollective.substack.com/p/the-brief-anthropics-opus-46-agent)[^2]. OpenAI’s GPT‑5.3‑Codex posts top results on SWE‑Bench Pro and Terminal‑Bench 2.0 and helped debug its own training pipeline [Data Points](https://www.deeplearning.ai/the-batch/claude-opus-4-6-pushes-the-envelope/)[^3], while practitioners surface Claude Code’s new Auto‑Memory behavior/controls for safer long‑running projects [Reddit](https://www.reddit.com/r/ClaudeCode/comments/1qzmofn/how_claude_code_automemory_works_official_feature/)[^4] and Anthropic leaders say AI now writes nearly all their internal code [India Today](https://www.indiatoday.in/technology/news/story/anthropic-says-ai-writing-nearly-100-percent-code-internally-claude-basically-writes-itself-now-2865644-2026-02-09)[^5]. [^1]: Adds: Opus 4.6 features (1M context), long‑context results, adaptive/effort/compaction API controls, and unchanged pricing. [^2]: Adds: Agent Teams in Claude Code, Office (Excel/PowerPoint) sidebars, 1M context, and benchmark framing at launch. [^3]: Adds: GPT‑5.3‑Codex benchmarks, 25% speedup, availability, and self‑use in OAI’s training/deployment pipeline. [^4]: Adds: Concrete Auto‑Memory details (location, 200‑line cap) and disable flag for policy compliance. [^5]: Adds: Real‑world claim of near‑100% AI‑written internal code at Anthropic, indicating mature SDLC use.

2026-02-09
anthropic openai claude-opus-46 claude-code gpt-53-codex

OpenAI recommends GPT-5.3-Codex as the default agentic coding model

OpenAI now recommends GPT-5.3-Codex as the default Codex model, signaling a step-up in agentic coding and reasoning for real-world engineering. The official Codex Models page highlights GPT-5.3-Codex as the most capable, with GPT-5.2-Codex as predecessor and a smaller GPT-5.1-Codex-mini option for cost-sensitive tasks [OpenAI Codex Models](https://developers.openai.com/codex/models/)[^1]. An anecdotal report describes spending $10,000 to automate research with Codex, indicating emerging large-scale usage patterns [Practitioner report](https://links.tldrnewsletter.com/J7poJAf)[^2]. [^1]: Official model lineup and default recommendation. [^2]: Describes substantial Codex-driven automation and spend.

2026-02-07
openai codex gpt-53-codex gpt-52-codex gpt-51-codex-mini

OpenAI ships GPT-5.3-Codex into IDEs, terminals, web, and a macOS app

OpenAI launched GPT-5.3-Codex, a faster coding model now embedded in IDEs, the terminal, web, and a macOS app, with early claims it assisted in building itself. OpenAI details ~25% faster runs, stronger SWE-Bench/Terminal-Bench results, and broad distribution via CLI, IDE extensions, web, and a new macOS app in the announcement [Introducing GPT‑5.3‑Codex](https://openai.com/index/introducing-gpt-5-3-codex/)[^1]. Coverage notes all paid ChatGPT plans can access it now, API access is coming, and the team used Codex to debug, manage deployment, and evaluate results during its own development [TechRadar report](https://www.techradar.com/pro/openai-unveils-gpt-5-3-codex-which-can-tackle-more-advanced-and-complex-coding-tasks)[^2], with additional workflow and positioning details on distribution and SDLC scope [AI News Hub](https://www.chatai.com/posts/openai-pushes-codex-deeper-into-developer-workflows-with-gpt-5-3-codex-release)[^3]. [^1]: Adds: Official feature, performance, and distribution overview. [^2]: Adds: Access paths (paid ChatGPT plans), benchmarks, and "built itself" context. [^3]: Adds: Deeper coverage of IDE/CLI/macOS integration, speedup figure, and API timing.

2026-02-07
openai gpt-53-codex chatgpt codex-macos-app gpt-5-3-codex

ChatGPT-4o API endpoint deprecation slated for Feb 17, 2026

An OpenAI community thread flags the planned deprecation of the ChatGPT-4o API endpoint on Feb 17, 2026, with user feedback highlighting migration and compatibility concerns—start planning for replacements and breakage now ([Feedback on Deprecation of ChatGPT-4o Feb 17, 2026 API Endpoint](https://community.openai.com/t/feedback-on-deprecation-of-chatgpt-4o-feb-17-2026-api-endpoint/1372477#post_20)[^1]). For backend/data pipelines, inventory where 4o is used, pin model versions, and run dual-write/dual-run evaluations to validate behavior, latency, and cost before switching.
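The dual-run step can be sketched generically: send the same prompt to the deprecated model and its candidate replacement, record output and latency, and diff before cutting over. The model ids and the `call_model` function below are placeholders for whatever client you already use, not an OpenAI API.

```python
import time

def dual_run(prompt: str, call_model, legacy: str = "chatgpt-4o-latest",
             candidate: str = "replacement-model") -> dict:
    """Run one prompt against both models, recording output and latency.

    `call_model(model, prompt) -> str` is your existing client function; the
    default model names are placeholders to swap for your real ids.
    """
    results = {}
    for name in (legacy, candidate):
        start = time.perf_counter()
        try:
            output = call_model(name, prompt)
        except Exception as exc:  # capture breakage instead of aborting the sweep
            output = f"<error: {exc}>"
        results[name] = {"output": output,
                         "latency_s": time.perf_counter() - start}
    results["match"] = results[legacy]["output"] == results[candidate]["output"]
    return results
```

Running this over a sampled prompt log gives a concrete mismatch rate and latency delta to justify (or delay) the migration, instead of switching blind on deprecation day.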

2026-02-04
openai chatgpt-4o openai-api api-versioning llm-ops

Sam Altman: Move Fast on AI Agents or Fall Behind

OpenAI CEO Sam Altman urged enterprises to rapidly adopt AI "workers" and agentic tooling, warning that organizations not set up for this shift will be at a major disadvantage and should expect some work and risk in rollout ([TechRadar coverage](https://www.techradar.com/pro/companies-that-are-not-set-up-to-quickly-adopt-ai-workers-will-be-at-a-huge-disadvantage-openai-sam-altman-warns-firms-not-to-fall-behind-on-ai-but-notes-its-going-to-take-a-lot-of-work-and-some-risk)[^1]). He highlighted accelerating model capability and agent patterns (e.g., tools with computer/browser access) as key to productivity gains and predicted substantial improvement in model quality by 2026. [^1]: Adds: Summary of Altman's remarks on enterprise AI adoption urgency, agentic automation potential (computer/browser access), and expected model improvements.

2026-02-04
openai chatgpt codex cisco ai-agents

ChatGPT Apps SDK: Lessons on State, Data Fetching, and Backend Guardrails

Early field lessons from building dozens of ChatGPT Apps show that conventional web patterns—like just-in-time data fetching, UI-driven state, and heavy user configuration—often degrade agentic UX, pushing teams toward prefetching, server-owned state, and clearer tool contracts ([15 lessons](https://developers.openai.com/blog/15-lessons-building-chatgpt-apps)[^1]). Community threads surface real-world patterns and rough edges—from cross-domain builds and game dev agent tips to an unintended widget re-render issue—underscoring the need for idempotent backends and careful state handling ([community showcase](https://community.openai.com/t/show-us-what-you-re-building-with-the-chatgpt-apps-sdk/1365862?page=4#post_74)[^2], [game dev integration](https://community.openai.com/t/ai-in-game-development-gamedev-tips-tools-techniques-and-gpt-llm-agent-integration/1372841?page=2#post_44)[^3], [widget re-render bug](https://community.openai.com/t/re-rendering-of-widget-unintentionally/1367406#post_22)[^4]). [^1]: Field report with 15 lessons; warns that JIT fetching, UI-driven state, and explicit user config can harm agentic UX. [^2]: Community builds across domains; examples of real integrations with the Apps SDK. [^3]: Integration tips for LLM agents in game development; patterns that generalize to other domains. [^4]: Reports unintended widget re-renders in Apps SDK; implications for state and duplicate tool calls.

2026-02-04
openai chatgpt chatgpt-apps-sdk agentic-workflows state-management

OpenAI ships Codex macOS app: multi-agent command center with git worktrees and skills

OpenAI introduced the macOS-only Codex app as a "command center" to run multiple coding agents in parallel, isolate work via git worktrees, and extend workflows with a new Skills system—plus a limited-time inclusion with ChatGPT Free/Go and doubled rate limits for paid plans ([OpenAI blog](https://openai.com/index/introducing-the-codex-app/?_bhlid=b040462c226c34eb9531cc536689e69b976397a7)[^1]). Developer docs confirm Apple Silicon support today, a Windows/Linux waitlist, and that API-key sign-in may limit features like cloud threads ([Codex app docs](https://developers.openai.com/codex/app/)[^2]). Reporting adds competitive context against Anthropic’s Code Cowork/Claude Code and notes model guidance (use GPT‑5.2‑Codex for coding) and multi-agent monitoring aimed at centralizing team workflows ([Fortune](https://fortune.com/2026/02/02/openai-launches-codex-app-to-bring-coding-models-to-more-users-openclaw-ai-agents/)[^3]). [^1]: Adds: official product details on multi-agent orchestration, git worktrees, Skills, and rate limit changes. [^2]: Adds: confirms macOS-only (Apple Silicon), Windows/Linux waitlist, and API-key limitations for cloud threads. [^3]: Adds: market context vs Anthropic, enterprise adoption, model recommendations, and multi-agent monitoring pitch.

2026-02-03
openai codex-app gpt-52-codex chatgpt anthropic

OpenAI Codex ships macOS app with parallel agents, Plan mode, and higher limits

OpenAI released a macOS Codex app that runs parallel agent threads for long‑running work with built‑in Git/worktrees, skills, automations, and temporarily higher rate limits across app/CLI/IDE for paid tiers ([Codex changelog](https://developers.openai.com/codex/changelog/)[^1]). The latest release enables Plan mode by default, stabilizes personality config, supports loading skills from .agents/skills, and surfaces runtime metrics for diagnostics ([v0.94.0 release](https://github.com/openai/codex/releases/tag/rust-v0.94.0)[^2]). OpenAI is positioning Codex for autonomous, multi‑threaded, complex tasks vs. Claude Code, citing 1M monthly users and 20x growth since August, while community reports mention a large context window (unconfirmed) ([Sources newsletter](https://sources.news/p/openai-takes-aim-at-anthropics-coding)[^3], [Reddit thread](https://www.reddit.com/r/OpenAI/comments/1qu7hii/openai_just_massdeployed_codex_to_every_surface/)[^4]). [^1]: Official feature overview and rate-limit details. [^2]: Release notes (Plan mode default, skills folder support, personality, metrics). [^3]: Press briefing recap with positioning vs. Claude Code and usage stats. [^4]: Community summary noting "trinity" surfaces and context-size claim (unverified).

2026-02-03
openai codex chatgpt anthropic claude-code