terminal
howtonotcode.com
The New Stack logo

The New Stack

Company

In computer science, a stack is an abstract data type that serves as a collection of elements with two main operations: Push, which adds an element to the collection, and Pop, which removes the most recently added element. Additionally, a peek operation can, without modifying the stack, return the value of the last element added (the item at the top of the stack). The name stack is an analogy to a set of physical items stacked one atop another, such as a stack of plates. The order in which an

article 8 storys calendar_today First seen: 2026-02-10 update Last seen: 2026-03-03 menu_book Wikipedia

Stories

Showing 1-8 of 8

OpenClaw rockets to GitHub’s top spot—security and ops readiness now in focus

OpenClaw, an open-source legal AI project, has surged to GitHub’s most-starred status while raising fresh security and governance questions for teams considering adoption. A [WebProNews report](https://www.webpronews.com/openclaws-meteoric-rise-on-github-how-an-open-source-legal-ai-project-dethroned-react-as-the-most-starred-software-repository/) says OpenClaw has overtaken React in stars, propelled by its structured legal datasets and AI tooling that promise to democratize access and fuel model training. The New Stack urges caution on provenance and security in “is it safe?” coverage, flagging supply-chain and governance risks before production use ([read more](https://thenewstack.io/openclaw-github-stars-security/)). A March update video highlights Docker support, cron job fixes, and how-to-upgrade guidance—plus references to Claude 4.6 “Adaptive Thinking”—signaling quickening operational maturity and clearer integration touchpoints ([watch](https://www.youtube.com/watch?v=4K1JRI7xA08&pp=ygUSQ2xhdWRlIENvZGUgdXBkYXRl)).

calendar_today 2026-03-03
openclaw github claude docker security

OpenAI rolls out GPT-5.3 Instant and 5.3-Codex to the API

OpenAI released GPT-5.3 Instant with faster, more grounded responses and made it available via the API alongside the new 5.3-Codex for code tasks. [OpenAI’s system card](https://openai.com/index/gpt-5-3-instant-system-card/) describes GPT‑5.3 Instant as quicker, better at contextualizing web-sourced answers, and less likely to derail into caveats, with safety mitigations largely unchanged from 5.2. Developer posts indicate the API model is exposed as [gpt-5.3-chat-latest](https://community.openai.com/t/api-model-gpt-5-3-chat-latest-available-aka-instant-on-chatgpt/1375606) (aka “instant” in ChatGPT) and introduce [GPT‑5.3‑Codex](https://community.openai.com/t/introducing-gpt-5-3-codex-the-most-powerful-interactive-and-productive-codex-yet/1373453) for stronger code generation, while industry coverage notes it “dials down the cringe” in chat flow ([The New Stack](https://thenewstack.io/openai-gpt-5-1-instant/)).

calendar_today 2026-03-03
openai gpt-53-instant gpt-53-codex chatgpt openai-api

AI agents under attack: prompt injection exploits and new defenses

Enterprises deploying AI assistants and desktop agents face real prompt-injection and safety failures in tools like Copilot, ChatGPT, Grok, and OpenClaw, while new detection methods that inspect LLM internals are emerging to harden defenses. Security researchers show popular assistants can be steered into malware generation, phishing, and data exfiltration via prompt injection and social engineering, with heightened risk when models tap external data sources, as covered in [WebProNews](https://www.webpronews.com/when-your-ai-assistant-turns-against-you-how-hackers-are-weaponizing-copilot-grok-and-chatgpt-to-spread-malware/). Companies are also restricting high-privilege agents like [OpenClaw](https://arstechnica.com/ai/2026/02/openclaw-security-fears-lead-meta-other-ai-firms-to-restrict-its-use/), citing unpredictability and privacy risk, even as OpenAI commits to keep it open source. The fragility extends to retrieval and web-grounded answers: a reporter manipulated [ChatGPT and Google’s AI](https://www.bbc.com/future/article/20260218-i-hacked-chatgpt-and-googles-ai-and-it-only-took-20-minutes?_bhlid=fca599b94127e0d5009ae7449daf996994809fc2) with a single blog post, underscoring the ease of large-scale influence. AppSec leaders are already reframing strategy for AI-era vulns, as flagged by [The New Stack](https://thenewstack.io/ai-agents-appsec-strategy/). Beyond I/O filters, Zenity proposes a maliciousness classifier that reads the model’s internal activations to flag manipulative prompts, releasing paper, infra, and cross-domain benchmarks to foster “agentic security” practices, detailed by [Zenity Labs](https://labs.zenity.io/p/looking-inside-a-maliciousness-classifier-based-on-the-llm-s-internals).

calendar_today 2026-02-20
microsoft-copilot grok chatgpt openclaw openai

Intelligent orchestration in the AI era: GitLab's SDLC pitch

GitLab argues that intelligent orchestration across the SDLC is key to turning AI promise into predictable, secure delivery. A recent New Stack piece outlines GitLab’s view that the “AI paradox” (big potential, messy outcomes) is best addressed by consolidating workflows and enforcing policy-driven orchestration across the pipeline, rather than adding more disconnected tools ([The New Stack: How intelligent orchestration transforms software innovation](https://thenewstack.io/ai-paradox-gitlab/)). The emphasis is on governing how and when AI is used in coding, testing, and release, with automated gates, standardized telemetry, and clear ownership. For engineering leaders, the takeaway is to treat AI as a capability embedded in DevSecOps—policies, change-aware automation, and feedback loops—so teams can reduce cycle time without losing control over cost, quality, and compliance.

calendar_today 2026-02-12
gitlab cicd devsecops ai-orchestration policy-as-code

Copilot CLI stabilizes for long sessions as IDEs move to agentic, team‑scoped AI

GitHub Copilot CLI’s latest update focuses on memory reductions and long‑session stability while IDE workflows and AI agents mature around team‑level customization and modernization tasks. GitHub Copilot CLI v0.0.410 ships broad stability improvements—fixing high memory usage under rapid logging, reducing streaming overhead, improving long‑session compaction, and adding ergonomic shell features like Ctrl+Z suspend/resume, Page Up/Down scrolling, repo‑level validation toggles, and an IDE status indicator when connected ([release notes](https://github.com/github/copilot-cli/releases)). The momentum aligns with a wider agentic shift: The New Stack frames VS Code as a “multi‑agent command center” for developers ([coverage](https://thenewstack.io/vs-code-becomes-multi-agent-command-center-for-developers/)), and Microsoft’s Copilot App Modernization details AI agents that assess, upgrade, containerize, and deploy .NET/Java apps to Azure in days ([deep dive](https://itnext.io/how-microsoft-is-using-ai-agents-to-turn-8-month-app-modernizations-into-days-a-technical-deep-8340a33513e7)). For IDE standardization, JetBrains/Android Studio Copilot customizations support workspace‑scoped settings committed under .github so teams can share constraints and conventions across projects ([guide](https://www.telefonica.com/en/communication-room/blog/github-copilot-android-studio-customization/)); also watch cost dynamics—one report shows OpenCode using far more credits than Copilot CLI for the same prompt, warranting usage instrumentation and policy checks ([user report](https://www.reddit.com/r/GithubCopilot/comments/1r2fhs2/opencode_vs_github_copilot_cli_huge_credit_usage/)).

calendar_today 2026-02-12
github-copilot-cli github visual-studio-code android-studio jetbrains

OpenAI Codex-Spark debuts on Cerebras for near-instant agentic coding

OpenAI launched GPT-5.3-Codex-Spark, a fast, steerable coding model served on Cerebras hardware to deliver near-instant responses for real-time agentic development. OpenAI and Cerebras unveiled a research preview of Codex-Spark aimed at live, iterative coding with responsiveness over 1,000 tokens/s, enabled by the Cerebras Wafer-Scale Engine, and designed to keep developers “in the loop” during agentic work [Cerebras announcement](https://www.cerebras.ai/blog/openai-codexspark). Independent coverage frames this as OpenAI’s first major inference move beyond Nvidia, positioning Cerebras for ultra-low-latency workloads while acknowledging capability tradeoffs versus the full GPT‑5.3‑Codex on autonomous engineering benchmarks [VentureBeat](https://venturebeat.com/technology/openai-deploys-cerebras-chips-for-15x-faster-code-generation-in-first-major) and broader speed-focused reporting [The New Stack](https://thenewstack.io/openais-new-codex-spark-is-optimized-for-speed/). On the tooling front, the openai/codex v0.99.0 release adds app‑server APIs for steering active turns, enterprise controls via requirements.toml (e.g., web search modes, network constraints), improved TUI flows, and concurrent shell command execution—useful for orchestrating agent runs with higher control and safety [GitHub release notes](https://github.com/openai/codex/releases/tag/rust-v0.99.0). For adoption patterns, a practical guide outlines “agent‑first engineering” using Codex CLI/IDE, cloud sandboxes for parallel tasks, an SDK for programmatic control, and GitHub Actions to plug agents into CI/CD with clear definitions of “done” [agentic workflow guide](https://www.gend.co/fr/blog/codex-agent-first-engineering).

calendar_today 2026-02-12
openai cerebras-systems nvidia gpt-53-codex-spark gpt-53-codex

AI dev productivity paradox: slower shipping, rising agentic platforms

Evidence from enterprises suggests current AI assistants aren’t reducing workload or accelerating shipping, and attention is shifting toward agentic platforms to tackle end‑to‑end bottlenecks. A new study argues AI often increases pace/volume expectations rather than cutting toil, contributing to burnout and coordination overheads, especially in knowledge work [Harvard Business Review](https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it?_bhlid=e06e6cf6369dfa37004e0d3936320258b0838494)[^1]. GitLab’s CEO says AI isn’t yet helping large orgs ship code faster, citing integration, governance, and review bottlenecks [The New Stack](https://thenewstack.io/gitlab-ceo-on-why-ai-isnt-helping-enterprise-ship-code-faster/)[^2], while GitHub’s former CEO is launching a developer platform aimed at agentic coding, signaling a pivot from autocomplete to workflow‑level automation [The New Stack](https://thenewstack.io/thomas-dohmke-interview-entire/)[^3]. [^1]: Adds: Research-backed perspective that AI can intensify work rather than reduce it, framing the productivity paradox. [^2]: Adds: Executive view from GitLab on why AI hasn’t improved enterprise shipping speed (process, governance, and integration constraints). [^3]: Adds: Signals market shift toward agentic/closed-loop developer platforms targeting end-to-end workflow automation.

calendar_today 2026-02-10
gitlab github agentic-coding developer-productivity sdlc

AI coding boosts some tasks by 56% but slows others by 19%

AI coding assistants can make developers about 56% faster on some tasks but about 19% slower on others, indicating uneven productivity gains that depend on task type and context. A summary from The New Stack reviews evidence behind these mixed effects and offers practical nuance on when AI helps versus hurts ([How AI coding makes developers 56% faster and 19% slower](https://thenewstack.io/how-ai-coding-makes-developers-56-faster-and-19-slower/)[^1]). [^1]: Adds: Concise survey of studies and practitioner observations quantifying speed-ups and slow-downs.

calendar_today 2026-02-09
the-new-stack ai-coding-assistants developer-productivity code-generation testing