Samsung eyes on-device vibe coding; modular LoRA routing beats model merging offline
Samsung is exploring on-device “vibe coding” for Galaxy phones, while new open-source work shows that modular LoRA routing can beat weight-space model merging for offline, privacy-preserving AI.

Samsung’s mobile R&D lead said the company is investigating natural-language app generation on Galaxy devices, pushing on-device AI beyond photo edits and search. The idea echoes desktop tools such as Cursor, Replit, and GitHub Copilot, but targets consumers on phones rather than just developers; no timeline has been shared ([report](https://www.webpronews.com/samsung-is-exploring-vibe-coding-on-galaxy-phones-heres-what-that-actually-means/)).

In parallel, a developer demonstrated why weight-space model merging can fail and proposed a “Gossip Handshake”: devices share small LoRA adapters over BLE and route each query to the most relevant expert. On Apple Silicon with Qwen2.5, this switching approach scored 5.6x to 13x better than merging while keeping data local; the code is open-sourced ([write-up](https://dev.to/tflux2011/why-merging-ai-models-fails-and-how-a-gossip-handshake-fixed-it-3gef), [repo](https://github.com/tflux2011/gossip-handshake)).

Together, these point to an edge-first stack: on-device app generation paired with adapter libraries, device-to-device sync, and semantic routing. Backends will need clean APIs, policy guardrails, and offline-first telemetry to support it.
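The adapter-sharing side of the handshake can be pictured as a manifest exchange: each device lists the adapters it holds, and after a handshake a peer requests only what it is missing. This is an illustrative sketch, not the repo's actual protocol; the `AdapterMeta` fields and `missing_from` helper are hypothetical.

```python
# Hypothetical sketch of a peer-to-peer adapter "gossip" exchange.
# Devices swap small manifests, then fetch only the adapters they lack;
# LoRA adapters are small enough that BLE transfer is plausible.
from dataclasses import dataclass

@dataclass(frozen=True)
class AdapterMeta:
    name: str      # e.g. "sql-expert" (illustrative)
    sha256: str    # content hash, used to deduplicate across peers
    size_kb: int   # adapter payload size

def missing_from(local: set, remote: set) -> list:
    """Return the remote adapters a device should request after a handshake."""
    local_hashes = {m.sha256 for m in local}
    return [m for m in remote if m.sha256 not in local_hashes]
```

Deduplicating by content hash rather than by name means a renamed copy of an adapter is not transferred twice.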
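The routing side, picking one expert adapter per query instead of merging weights, can be sketched with a toy similarity score. The write-up likely uses embedding-based routing; here a stdlib-only bag-of-words cosine stands in, and the adapter registry and `route` function are invented for illustration.

```python
# Minimal sketch of per-query adapter routing (technique illustrated with a
# bag-of-words cosine score; a real router would use semantic embeddings).
from collections import Counter
import math

def bag_of_words(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical adapter registry: adapter name -> short skill profile.
ADAPTERS = {
    "sql-expert": "sql query database join select schema table",
    "regex-expert": "regex pattern match capture group substitution",
    "swift-ui": "swiftui view state binding layout ios screen",
}

def route(query: str) -> str:
    """Pick the adapter whose skill profile best matches the query."""
    q = bag_of_words(query)
    return max(ADAPTERS, key=lambda name: cosine(q, bag_of_words(ADAPTERS[name])))
```

Because only one adapter is active per query, each expert keeps its full capability, which is the property weight-space merging tends to destroy.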