terminal
howtonotcode.com
CORE logo

CORE

Platform

CORE aggregates open access research outputs for academics and researchers.

article 10 storys calendar_today First seen: 2026-02-03 update Last seen: 2026-03-03 open_in_new Website menu_book Wikipedia

Resources

Links to check for updates: homepage, feed, or git repo.

home Homepage

code Git repo

Stories

Showing 1-10 of 10

Agentic AI hits production in enterprise workflows

Agentic AI is moving from pilots to production across enterprise workflows, forcing teams to harden data governance, safety controls, and observability. A joint analysis highlights five converging forces shaping the 2026 enterprise—agentic AI, workforce reconfiguration, platform consolidation, data governance, and industry-specific apps—and argues the next 12–18 months are decisive for enterprise-wide integration, not incremental pilots ([Deloitte and ServiceNow](https://www.webpronews.com/the-ai-fueled-enterprise-of-2026-deloitte-and-servicenow-map-the-five-forces-reshaping-corporate-technology-strategy/)). Microsoft is pushing this shift in core business systems as Dynamics 365 moves beyond passive copilots toward autonomous agents that monitor conditions, plan, and execute multi-step workflows across ERP/CRM, raising immediate questions around approvals, rollback, and auditability ([Dynamics 365 agentic AI](https://www.webpronews.com/agentic-ai-comes-to-microsoft-dynamics-365-what-enterprise-software-teams-need-to-know-right-now/)). Broader market signals point to proactive AI—systems that anticipate needs based on long-term memory—becoming normal, exemplified by ChatGPT’s proactive research and Meta’s work on follow-up messaging, which will boost productivity but also amplify trust, bias, and privacy frictions ([TechRadar outlook](https://www.techradar.com/pro/2025-was-the-year-ai-grew-up-how-will-ai-evolve-in-2026)).

calendar_today 2026-03-03
microsoft-dynamics-365 servicenow deloitte microsoft openai

AI is collapsing the storage–compute split and rewiring databases

AI workloads are forcing teams to reduce data movement, bring compute closer to data, and adopt databases that handle agent-scale access patterns and vectors by default. AI pipelines repeatedly touch unstructured data and embeddings, making the classic storage–compute separation a cost center; with data prep consuming up to 80% of effort and 93% of GPUs sitting idle from I/O waits, [InfoWorld](https://www.infoworld.com/article/4138058/why-ai-requires-rethinking-the-storage-compute-divide.html) argues for “smart storage” and near-data processing. At the market layer, databases remain the load-bearing core with high switching costs, but AI agents change access patterns, intensifying the Databricks vs Snowflake platform race, per this [Business Engineer analysis](https://businessengineer.ai/p/databricks-snowflake-and-the-ai-database). On the ground, the FrankenSQLite effort bundles vector search, geospatial, and other extensions into a single precompiled SQLite binary, signaling a shift toward lightweight, compute-local capabilities for server-side and AI use cases ([WebProNews](https://www.webpronews.com/frankensqlite-the-audacious-experiment-stitching-together-sqlite-extensions-into-a-single-monstrous-database-engine/)).

calendar_today 2026-03-03
databricks snowflake oracle ibm microsoft

LangChain Core 1.2.14 stabilizes tool-call merges, preserves metadata, and tightens deserialization guidance

LangChain Core 1.2.14 delivers targeted fixes and docs updates to stabilize parallel tool calls, preserve merge metadata, clarify LangSmith tracing params, and harden deserialization practices. The release fixes incorrect list merging for parallel tool calls, preserves index and timestamp fields during merges, and prevents a recursion error when args_schema is a dict—improving reliability for agent orchestration and data flows; see details in the [1.2.14 notes](https://github.com/langchain-ai/langchain/releases/tag/langchain-core%3D%3D1.2.14). It also clarifies LangSmith tracing behavior around integer temperature values, adds security warnings and best practices for deserialization, corrects a misleading Jinja2 sandboxing comment, and updates sys info reporting (removing LangServe, adding DeepAgents), with dependency bumps and minor doc fixes captured in the [changelog](https://github.com/langchain-ai/langchain/releases/tag/langchain-core%3D%3D1.2.14).

calendar_today 2026-02-20
langchain langsmith deepagents langserve python

Production RAG playbook + LangChain 1.2.10 safeguards

Building production RAG got easier this week with a practical map of nine retrieval patterns and LangChain 1.2.10 fixes for token counting and context overflow. [9 RAG architectures](https://atalupadhyay.wordpress.com/2026/02/10/9-rag-architectures-every-ai-developer-must-know/)[^1] and a [prompt caching deep dive](https://atalupadhyay.wordpress.com/2026/02/10/prompt-caching-from-zero-to-production-ready-llm-optimization/)[^2] provide runnable labs and concrete optimization tactics. The [LangChain 1.2.10](https://github.com/langchain-ai/langchain/releases/tag/langchain%3D%3D1.2.10)[^3] and [langchain-core 1.2.10](https://github.com/langchain-ai/langchain/releases/tag/langchain-core%3D%3D1.2.10)[^4] releases add a token-counting fix and a new ContextOverflowError to harden pipelines. [^1]: Adds: Maps nine RAG patterns (Standard, Conversational, CRAG, Adaptive, Self-RAG, Fusion, HyDE, Agentic, GraphRAG) with diagrams and Python/LangChain labs (ChromaDB, optional Neo4j). [^2]: Adds: End-to-end prompt caching guide with provider-specific notes, labs (single/multi-turn, RAG), and production best practices. [^3]: Adds: Release notes including a fix for token counting on partial message sequences and internal provider rename. [^4]: Adds: Release notes adding ContextOverflowError (raised for OpenAI/Anthropic), improved approximate token counting, and minor docs/features.

calendar_today 2026-02-10
langchain openai anthropic chromadb neo4j

Agentic development lands in Xcode, GitHub Actions, and Google APIs

Agentic development is moving from proofs to practice across core tooling, with Xcode 26.3 adding in-IDE agents and MCP, GitHub piloting agentic workflows in Actions with guardrails, and Google introducing APIs that make assistants stateful and documentation-accurate. Apple’s latest Xcode adds deeper agent capabilities and first-class MCP integration, enabling Claude/Codex-style agents to plan, run builds/tests, and verify via Previews within the IDE [InfoQ](https://www.infoq.com/news/2026/02/xcode-26-3-agentic-coding/)[^1]. GitHub Next’s experimental Agentic Workflows bring locked-down, event-driven agents to CI using a CLI that compiles natural language into read-only, sandboxed Actions [Amplifi Labs](https://www.amplifilabs.com/post/css-scope-hits-baseline-github-agentic-workflows-oss-trust-tools)[^2]; meanwhile, Google’s Developer Knowledge API with an MCP server and the new Interactions API push assistants toward on-demand, canonical retrieval and managed, stateful steps for deep research [DevOps.com](https://devops.com/google-launches-developer-knowledge-api-to-give-ai-tools-access-to-official-documentation/)[^3] [Towards Data Science](https://towardsdatascience.com/the-death-of-the-everything-prompt-googles-move-toward-structured-ai/)[^4]. [^1]: Adds: release details on agent behaviors, MCP via mcpbridge, and verification in Xcode 26.3. [^2]: Adds: overview of GitHub Agentic Workflows model, guardrails, and repo automation scenarios. [^3]: Adds: specifics on the Developer Knowledge API, freshness guarantees, and MCP server integration. [^4]: Adds: explanation of Google’s Interactions API for stateful, tool-orchestrated agent flows.

calendar_today 2026-02-09
xcode anthropic claude-agent claude-code openai

OpenAI’s GPT-5.3-Codex rolls out to Copilot with faster, agentic workflows

OpenAI's GPT-5.3-Codex is a 25% faster, more agentic coding model built for long-running, tool-driven workflows and is now rolling out across Codex surfaces and GitHub Copilot with stronger cybersecurity guardrails. OpenAI positions the model for multi-step coding and broader "computer use" with SOTA benchmark results and notes early versions helped build and operate itself [Pulse 2.0](https://pulse2.com/openai-reveals-gpt-5-3-codex-a-faster-agentic-coding-model-built-for-long-running-work/)[^1] and [AI-360](https://www.ai-360.online/openai-launches-gpt-5-3-codex-extending-agentic-coding-and-real-time-steering/)[^2]. GitHub confirms GPT-5.3-Codex is GA in Copilot (Pro/Business/Enterprise) across VS Code, web, mobile, CLI, and the Coding Agent with an admin-enabled policy toggle and gradual rollout [GitHub Changelog](https://github.blog/changelog/2026-02-09-gpt-5-3-codex-is-now-generally-available-for-github-copilot/)[^3], while OpenAI channels have it now with API access "soon" and a new Trusted Access for Cyber pilot [Pulse 2.0](https://pulse2.com/openai-reveals-gpt-5-3-codex-a-faster-agentic-coding-model-built-for-long-running-work/)[^1] and [ITP.net](https://www.itp.net/ai-automation/openai-launches-gpt-5-3-codex-the-new-era-of-ai-powered-coding-and-beyond)[^4]. [^1]: Adds: Core capabilities, benchmark highlights, safety posture, availability across Codex app/CLI/IDE/web, and NVIDIA GB200 NVL72 infra. [^2]: Adds: Real-time steering in extended runs and cybersecurity classification/pilot context for enterprise adoption. [^3]: Adds: Concrete Copilot GA details, supported surfaces, plans, rollout, and admin policy enablement. [^4]: Adds: Additional context on broader professional task coverage and API timing.

calendar_today 2026-02-09
openai gpt-53-codex openai-codex-app github github-copilot

LLM-to-Docker in Local Dev: Use a Broker Pattern

A community question on letting OpenAI Codex control a local Docker environment highlights the need to mediate LLM-driven container actions through a safe, auditable broker instead of direct access. An OpenAI Community thread asks how to enable Codex-to-Docker connectivity in a local setup and surfaces the integration challenge for teams experimenting with LLM-guided container workflows [How to allow Codex connection to Docker in local environment?](https://community.openai.com/t/how-to-allow-codex-connection-to-docker-in-local-environment/1373567#post_1)[^1]. [^1]: Adds: Shows real-world demand and the core question teams face when wiring LLM suggestions to local Docker actions.

calendar_today 2026-02-07
openai codex docker containers security

Operationalizing MAESTRO for Agentic AI Threat Modeling in CI/CD

This piece shows how to take the MAESTRO agentic-AI threat model from theory to practice by integrating automated classification (TITO) into CI/CD to continuously flag LLM-driven tool actions, dynamic trust-boundary crossings, and prompt-injection chains in real codebases ([Applying MAESTRO to Real-World Agentic AI Threat Models](https://kenhuangus.substack.com/p/applying-maestro-to-real-world-agentic)[^1]). The core message: SAST alone misses agent behavior; you need runtime-aware threat modeling that treats prompts as untrusted code and audits every tool invocation end-to-end. [^1]: Adds: Walkthrough of wiring MAESTRO into an automated tool (TITO), examples of findings on agentic codebases, and guidance to embed threat modeling into CI/CD.

calendar_today 2026-02-04
maestro tito-threat-in-and-threat-out agentic-ai llm-agents threat-modeling

2026 priority for backend/data teams: safe-by-design AI

AI experts urge a shift to "safe by design" systems by 2026, emphasizing built‑in guardrails, monitoring, and accountability across the stack—translate this into evals, auditability, and data provenance for your services ([TechRadar](https://www.techradar.com/ai-platforms-assistants/its-time-to-demand-ai-that-is-safe-by-design-what-ai-experts-think-will-matter-most-in-2026)[^1]). A candid counterpoint argues AI isn't taking jobs so much as our illusions about rote work, underscoring the need to refocus teams on higher‑value, safety‑critical engineering and governance ([Dev.to](https://dev.to/igbominadeveloper/ai-isnt-take-our-jobs-its-taking-our-illusions-138j)[^2]). [^1]: Adds: Expert consensus and timeline framing for "safe by design" AI as the core priority for 2026. [^2]: Adds: Reframing of workforce impact, motivating investment in safety, evaluation, and governance over rote coding.

calendar_today 2026-02-03
llms data-pipelines ai-safety ai-governance

CORE: Persistent memory and actions for coding agents via MCP

CORE is an open-source, self-hostable memory agent that gives coding assistants persistent, contextual recall of preferences, decisions, directives, and goals, and can trigger actions across your stack via MCP and app integrations like Linear, GitHub, Slack, Gmail, and Google Sheets; see [CORE on GitHub](https://github.com/RedPlanetHQ/core)[^1]. For backend/data teams, this replaces brittle context-dumps with time- and intent-aware retrieval across Claude Code and Cursor, enabling consistent code reviews and automated updates tied to prior decisions. [^1]: Adds: repo, docs, and integration details (MCP, supported apps, memory model, self-hosting).

calendar_today 2026-02-03
core redplanethq claude-code cursor mcp