LOCAL CURSOR-STYLE AI INSIDE ZED: EARLY ARCHITECTURE AND REPO
An experimental Zed IDE fork is adding local AI features—semantic code search, cross-file reasoning, and web browsing—backed by vector DB indexing and local models (Ollama/llama.cpp or OpenAI-compatible APIs). The author seeks concrete guidance on AST-aware chunking, incremental re-indexing for multi-language repos, streaming results to the editor, sandboxed browsing with prompt-injection defenses, and model orchestration. The repo already exposes settings for vector DB, embedder provider, model, API keys, and an index toggle.
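Since the summary mentions settings for the vector DB, embedder provider, model, API keys, and an index toggle, a configuration fragment helps make that concrete. The key names and values below are purely illustrative (Zed uses JSON settings, but this is not the fork's actual schema):

```json
{
  // Hypothetical keys; the fork's real setting names may differ.
  "semantic_index": {
    "enabled": true,
    "vector_db": "qdrant",
    "embedding_provider": "ollama",
    "embedding_model": "nomic-embed-text",
    "api_key_env": "OPENAI_API_KEY",
    "exclude": ["**/node_modules/**", "**/*.min.js"]
  }
}
```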
Offers a path to code-aware AI assistants that run locally for privacy-conscious teams.
Defines practical integration points (indexing, embeddings, orchestration) that mirror cloud copilots without vendor lock-in.
- Compare AST-aware vs text chunking and incremental re-indexing accuracy/latency on multi-language repositories.
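The difference between the two chunking strategies can be sketched briefly. The snippet below uses Python's standard-library `ast` module as a stand-in for a real multi-language parser such as tree-sitter: text chunking slices at arbitrary byte offsets and can cut a function in half, while AST-aware chunking emits one syntactically complete top-level definition per chunk.

```python
import ast

def text_chunks(source: str, size: int = 40) -> list[str]:
    """Naive fixed-size chunking: may split a definition mid-body."""
    return [source[i:i + size] for i in range(0, len(source), size)]

def ast_chunks(source: str) -> list[str]:
    """AST-aware chunking: one chunk per top-level function/class,
    so every chunk is a complete syntactic unit with clean boundaries."""
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # lineno/end_lineno are 1-based and inclusive.
            chunks.append("\n".join(lines[node.lineno - 1:node.end_lineno]))
    return chunks

SAMPLE = """def add(a, b):
    return a + b

class Greeter:
    def hello(self):
        return 'hi'
"""

if __name__ == "__main__":
    print(text_chunks(SAMPLE))  # arbitrary slices
    print(ast_chunks(SAMPLE))   # whole definitions
```

A production indexer would apply the same idea per language via tree-sitter grammars, and attach file/line metadata to each chunk for incremental re-indexing.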
- Evaluate local model performance and memory footprint on standard dev machines, and test prompt-injection defenses for web-browsing context.
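One baseline prompt-injection defense for fetched web content is to treat it as untrusted data: strip lines that look like instructions to the model and fence the remainder inside explicit delimiters. This is a heuristic sketch only (the regex and tag names are assumptions, not the fork's implementation); a real defense would also separate instructions from data at the prompt level and restrict what browsed content can trigger.

```python
import re

# Assumed patterns; a real filter would be broader and tested adversarially.
SUSPECT = re.compile(
    r"ignore\s+(?:all\s+|any\s+)?previous\s+instructions"
    r"|disregard\s+.{0,40}instructions"
    r"|system\s+prompt",
    re.IGNORECASE,
)

def sanitize_web_context(page_text: str) -> str:
    """Drop instruction-like lines and wrap the rest as untrusted data."""
    kept = [ln for ln in page_text.splitlines() if not SUSPECT.search(ln)]
    return "<untrusted_web_content>\n" + "\n".join(kept) + "\n</untrusted_web_content>"
```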
Legacy codebase integration strategies
- 01. Start with read-only semantic search on a subset of services, and exclude binaries and generated files to keep indexing manageable.
- 02. Validate embedder/model coverage across your language mix, and ensure LSP/formatter hooks do not regress editor responsiveness.
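Excluding binaries and generated files (step 01 above) usually comes down to a cheap predicate run before chunking. A minimal sketch, with skip lists that are assumptions to tune per repo rather than the fork's actual defaults:

```python
from pathlib import Path

# Assumed skip lists; tune per repository.
SKIP_DIRS = {".git", "node_modules", "target", "dist", "build", "__pycache__"}
SKIP_EXTS = {".png", ".jpg", ".gif", ".pdf", ".zip", ".so", ".dylib", ".exe", ".lock"}

def is_indexable(path: Path, max_bytes: int = 512_000) -> bool:
    """Return True if `path` looks like hand-written source worth embedding."""
    if any(part in SKIP_DIRS for part in path.parts):
        return False
    if path.suffix.lower() in SKIP_EXTS or path.name.endswith(".min.js"):
        return False
    try:
        data = path.read_bytes()
    except OSError:
        return False
    # Oversized files and NUL bytes are strong signals of generated/binary content.
    if len(data) > max_bytes or b"\x00" in data:
        return False
    return True
```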
Fresh architecture paradigms
- 01. Define a pluggable contract for vector DBs and embedders early, and standardize chunking and metadata schemas.
- 02. Roll out in slices: enable 'explain code' and semantic search first, then introduce cross-file refactors and web context.
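The pluggable contract in step 01 can be sketched as two small interfaces plus a shared chunk-metadata record, so that Qdrant vs. another store, or Ollama vs. an OpenAI-compatible endpoint, become swappable backends. All names and the toy implementations below are illustrative, not the fork's API:

```python
import math
from dataclasses import dataclass
from typing import Protocol, Sequence

@dataclass(frozen=True)
class Chunk:
    """Standardized chunk metadata shared by all chunkers and stores."""
    path: str
    language: str
    start_line: int
    end_line: int
    text: str

class Embedder(Protocol):
    def embed(self, texts: Sequence[str]) -> list[list[float]]: ...

class VectorStore(Protocol):
    def upsert(self, chunks: Sequence[Chunk], vectors: Sequence[Sequence[float]]) -> None: ...
    def query(self, vector: Sequence[float], k: int) -> list[Chunk]: ...

class ToyEmbedder:
    """Deterministic stand-in for a real provider (Ollama / OpenAI-compatible)."""
    def embed(self, texts):
        return [[float(len(t)), float(t.count(" ")), float(t.count("("))] for t in texts]

class InMemoryStore:
    """Brute-force cosine search; a real backend would be Qdrant, etc."""
    def __init__(self):
        self._rows: list[tuple[Chunk, Sequence[float]]] = []

    def upsert(self, chunks, vectors):
        self._rows.extend(zip(chunks, vectors))

    def query(self, vector, k):
        def cos(a, b):
            num = sum(x * y for x, y in zip(a, b))
            den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return num / den if den else 0.0
        ranked = sorted(self._rows, key=lambda r: cos(vector, r[1]), reverse=True)
        return [c for c, _ in ranked[:k]]
```

Fixing these interfaces early means the chunker, embedder, and store can evolve independently, which is what makes the sliced rollout in step 02 practical.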