terminal
howtonotcode.com
TechRadar logo

TechRadar

Company

TechRadar is a technology news and reviews website.

article 9 storys calendar_today First seen: 2026-02-03 update Last seen: 2026-03-03 open_in_new Website menu_book Wikipedia

Resources

Links to check for updates: homepage, feed, or git repo.

home Homepage

rss_feed Feed

Stories

Showing 1-9 of 9

Agentic AI hits production in enterprise workflows

Agentic AI is moving from pilots to production across enterprise workflows, forcing teams to harden data governance, safety controls, and observability. A joint analysis highlights five converging forces shaping the 2026 enterprise—agentic AI, workforce reconfiguration, platform consolidation, data governance, and industry-specific apps—and argues the next 12–18 months are decisive for enterprise-wide integration, not incremental pilots ([Deloitte and ServiceNow](https://www.webpronews.com/the-ai-fueled-enterprise-of-2026-deloitte-and-servicenow-map-the-five-forces-reshaping-corporate-technology-strategy/)). Microsoft is pushing this shift in core business systems as Dynamics 365 moves beyond passive copilots toward autonomous agents that monitor conditions, plan, and execute multi-step workflows across ERP/CRM, raising immediate questions around approvals, rollback, and auditability ([Dynamics 365 agentic AI](https://www.webpronews.com/agentic-ai-comes-to-microsoft-dynamics-365-what-enterprise-software-teams-need-to-know-right-now/)). Broader market signals point to proactive AI—systems that anticipate needs based on long-term memory—becoming normal, exemplified by ChatGPT’s proactive research and Meta’s work on follow-up messaging, which will boost productivity but also amplify trust, bias, and privacy frictions ([TechRadar outlook](https://www.techradar.com/pro/2025-was-the-year-ai-grew-up-how-will-ai-evolve-in-2026)).

calendar_today 2026-03-03
microsoft-dynamics-365 servicenow deloitte microsoft openai

Agentic AI in production: deletion-aware data, audit trails, and supply chain guardrails

Agentic AI is hitting real production surfaces, but making it safe and monetizable now hinges on deletion-aware data models, auditable workflows, and tougher supply chain hygiene. Enterprises are squeezing AI into regulated workflows while facing a privacy paradox of data hunger vs. compliance, pushing teams to make agent decisions explainable and traceable across systems, as outlined in coverage of enterprise privacy pressures and agentic audit needs in [WebProNews](https://www.webpronews.com/the-ai-privacy-paradox-how-enterprises-are-walking-a-tightrope-between-innovation-and-data-protection/) and a practitioner guide on [agentic AI compliance and auditability](https://medium.com/@aiteacher/how-to-achieve-compliance-and-auditability-in-agentic-ai-workflows-beb912b1e759). A practical pattern emerging for paid AI interactions is to separate live threads from immutable “chronicle” snapshots and bind retention to entitlements, so account deletion, TTL jobs, and compliance requests don’t corrupt monetization or auditability—see the deletion-first architecture from this engineering post on [stabilizing AI products via retention authority and immutable assets](https://dev.to/cizo/if-your-ai-product-cant-handle-deletion-it-cant-handle-monetization-46ee). Security posture remains the swing factor: LLMs still pick secure code roughly half the time per [TechRadar](https://www.techradar.com/pro/ai-models-cant-fully-understand-security-and-they-never-will), open-source maintainers are being flooded by AI-agent PRs for "reputation farming," raising supply chain risk per [InfoWorld](https://www.infoworld.com/article/4132851/open-source-maintainers-are-being-targeted-by-ai-agent-as-part-of-reputation-farming.html), and platform policy friction is real as seen in [Manus AI’s Telegram agent suspension](https://www.testingcatalog.com/manus-ai-launched-24-7-agent-via-telegram-and-got-suspended/); yet business pressure to operationalize agents (e.g., “agentic process outsourcing”) is accelerating, per [Forbes](https://www.forbes.com/sites/sanjaysrivastava/2026/02/16/the-coming-of-agentic-process-outsourcing/).

calendar_today 2026-02-17
manus-ai telegram whatsapp meta-ai openclaw

Salesforce pauses Heroku as AI agents rise; adjust autoscaling and pipelines

Vendors are pivoting from traditional PaaS and CI/CD toward agentic platforms, with Salesforce halting new Heroku features and leaders touting AI agents, underscoring the need to rethink autoscaling and delivery flows. Salesforce put Heroku into sustaining engineering while prioritizing Agentforce [TechRadar](https://www.techradar.com/pro/salesforce-halts-development-of-new-features-for-heroku-cloud-ai-platform)[^1]; meanwhile, Databricks' CEO argues AI agents will render many SaaS apps irrelevant [WebProNews](https://www.webpronews.com/the-saas-sunset-why-databricks-ceo-believes-ai-agents-will-render-traditional-software-irrelevant/)[^2], echoing calls for agentic DevOps beyond classic CI/CD [HackerNoon](https://hackernoon.com/the-end-of-cicd-pipelines-the-dawn-of-agentic-devops?source=rss)[^3]. A real-world ECS/Grafana case study shows AI-heavy, I/O‑bound stacks can miss CPU-based autoscaling triggers, requiring new signals and tests [DEV](https://dev.to/shireen/understanding-aws-autoscaling-with-grafana-gl8)[^4]. [^1]: Confirms Salesforce halted new Heroku features and is prioritizing Agentforce. [^2]: Summarizes Databricks CEO’s thesis that AI agents will displace traditional SaaS. [^3]: Opinion piece advocating agentic DevOps supplanting conventional CI/CD pipelines. [^4]: Demonstrates ECS autoscaling pitfalls for I/O‑bound, LLM-integrated workloads using Grafana and k6.

calendar_today 2026-02-10
salesforce heroku agentforce databricks amazon-web-services

LLM safety erosion: single-prompt fine-tuning and URL preview data leaks

Enterprise fine-tuning and common chat UI features can quickly undermine LLM safety and silently exfiltrate data, so treat agentic AI security as a lifecycle with zero‑trust controls and gated releases. Microsoft’s GRP‑Obliteration shows a single harmful prompt used with GRPO can collapse guardrails across several model families, reframing safety as an ongoing process rather than a one‑time alignment step [InfoWorld](https://www.infoworld.com/article/4130017/single-prompt-breaks-ai-safety-in-15-major-language-models-2.html)[^1] and is reinforced by a recap urging teams to add safety evaluations to CI/CD pipelines [TechRadar](https://www.techradar.com/pro/microsoft-researchers-crack-ai-guardrails-with-a-single-prompt)[^2]. Separately, researchers demonstrate that automatic URL previews can exfiltrate sensitive data via prompt‑injected links, and a practical release checklist outlines SDLC gates to verify value, trust, and safety before launching agents [WebProNews](https://www.webpronews.com/the-silent-leak-how-url-previews-in-llm-powered-tools-are-quietly-exfiltrating-sensitive-data/)[^3] [InfoWorld](https://www.infoworld.com/article/4105884/10-essential-release-criteria-for-launching-ai-agents.html)[^4]. [^1]: Adds: original reporting on Microsoft’s GRP‑Obliteration results and cross‑model safety degradation. [^2]: Adds: lifecycle framing and guidance to integrate safety evaluations into CI/CD. [^3]: Adds: concrete demonstration of URL‑preview data exfiltration via prompt injection (OpenClaw case study). [^4]: Adds: actionable release‑readiness checklist for AI agents (metrics, testing, governance).

calendar_today 2026-02-10
microsoft azure gpt-oss deepseek-r1-distill google

Agent-first SDLC: from pilots to production

Agent-first development is moving from hype to execution, and teams that redesign workflows, codebases, and governance around AI agents are starting to ship faster while hiring now expects AI fluency by default. OpenAI’s internal playbook outlines concrete practices like making an agent the tool of first resort, maintaining AGENTS.md, exposing internal tools via CLI/MCP, and writing fast tests to keep agents productive and safe ([OpenAI team thread recap](https://threadreaderapp.com/thread/2019566641491963946.htmladar guide](https://www.techradar.com/pro/how-to-take-ai-from-pilots-to-deliver-real-business-value)[^2]). Urgency is rising with accelerating model capability and massive 2026 AI capex, and leadership signals that AI literacy is now table stakes for hiring ([Nate’s Substack](https://natesnewsletter.substack.com/p/the-two-career-collapses-happening)[^3]; [Cisco CEO remarks](https://www.webpronews.com/chuck-robbins-blunt-career-playbook-why-ciscos-ceo-says-the-rules-of-getting-hired-have-fundamentally-changed/)[^4]). [^1]: Practical blueprint for agent-first workflows (agents captain, AGENTS.md, skills, tool access via CLI/MCP, fast tests, quality bar). [^2]: Execution framework to scale beyond pilots with governance, integration, and business alignment. [^3]: Context on accelerating AI capability and investment signaling near-term impact pressure. [^4]: Market signal that AI fluency is expected across roles, not just engineering.

calendar_today 2026-02-09
openai codex camunda cisco epoch-ai

OpenAI ships GPT-5.3-Codex into IDEs, terminals, web, and a macOS app

OpenAI launched GPT-5.3-Codex, a faster coding model now embedded in IDEs, the terminal, web, and a macOS app, with early claims it assisted in building itself. OpenAI details ~25% faster runs, stronger SWE-Bench/Terminal-Bench results, and broad distribution via CLI, IDE extensions, web, and a new macOS app in the announcement [Introducing GPT‑5.3‑Codex](https://openai.com/index/introducing-gpt-5-3-codex/)[^1]. Coverage notes all paid ChatGPT plans can access it now, API access is coming, and the team used Codex to debug, manage deployment, and evaluate results during its own development [TechRadar report](https://www.techradar.com/pro/openai-unveils-gpt-5-3-codex-which-can-tackle-more-advanced-and-complex-coding-tasks)[^2], with additional workflow and positioning details on distribution and SDLC scope [AI News Hub](https://www.chatai.com/posts/openai-pushes-codex-deeper-into-developer-workflows-with-gpt-5-3-codex-release)[^3]. [^1]: Adds: Official feature, performance, and distribution overview. [^2]: Adds: Access paths (paid ChatGPT plans), benchmarks, and "built itself" context. [^3]: Adds: Deeper coverage of IDE/CLI/macOS integration, speedup figure, and API timing.

calendar_today 2026-02-07
openai gpt-53-codex chatgpt codex-macos-app gpt-5-3-codex

Sam Altman: Move Fast on AI Agents or Fall Behind

OpenAI CEO Sam Altman urged enterprises to rapidly adopt AI "workers" and agentic tooling, warning that organizations not set up for this shift will be at a major disadvantage and should expect some work and risk in rollout ([TechRadar coverage](https://www.techradar.com/pro/companies-that-are-not-set-up-to-quickly-adopt-ai-workers-will-be-at-a-huge-disadvantage-openai-sam-altman-warns-firms-not-to-fall-behind-on-ai-but-notes-its-going-to-take-a-lot-of-work-and-some-risk)[^1]). He highlighted accelerating model capability and agent patterns (e.g., tools with computer/browser access) as key to productivity gains and predicted substantial improvement in model quality by 2026. [^1]: Adds: Summary of Altman's remarks on enterprise AI adoption urgency, agentic automation potential (computer/browser access), and expected model improvements.

calendar_today 2026-02-04
openai chatgpt codex cisco ai-agents

2026 priority for backend/data teams: safe-by-design AI

AI experts urge a shift to "safe by design" systems by 2026, emphasizing built‑in guardrails, monitoring, and accountability across the stack—translate this into evals, auditability, and data provenance for your services ([TechRadar](https://www.techradar.com/ai-platforms-assistants/its-time-to-demand-ai-that-is-safe-by-design-what-ai-experts-think-will-matter-most-in-2026)[^1]). A candid counterpoint argues AI isn't taking jobs so much as our illusions about rote work, underscoring the need to refocus teams on higher‑value, safety‑critical engineering and governance ([Dev.to](https://dev.to/igbominadeveloper/ai-isnt-take-our-jobs-its-taking-our-illusions-138j)[^2]). [^1]: Adds: Expert consensus and timeline framing for "safe by design" AI as the core priority for 2026. [^2]: Adds: Reframing of workforce impact, motivating investment in safety, evaluation, and governance over rote coding.

calendar_today 2026-02-03
llms data-pipelines ai-safety ai-governance

Plan for multi-model agents and resilience in 2026

AI agents are set to pressure reliability, with more outages expected and a push toward chaos engineering and multi-cloud failover, per [TechRadar’s 2026 outlook](https://www.techradar.com/pro/the-year-of-the-ai-agents-more-outages-heres-what-lies-ahead-for-it-teams-in-2026)[^1]. In parallel, a [community thread on using Google Gemini with the OpenAI Agents SDK](https://community.openai.com/t/using-gemini-with-openai-agents-sdk/1307262#post_8)[^2] highlights growing demand for multi-model agent stacks—so design provider abstractions, circuit breakers, and fallback paths now.

calendar_today 2026-02-03
gemini openai-agents-sdk openai google techradar