howtonotcode.com

Google Gemini

AI Tool

Gemini is a generative AI chatbot and virtual assistant by Google.

5 stories · First seen: 2026-02-11 · Last seen: 2026-03-03 · Website · Wikipedia

Resources

Links to check for updates: homepage, feed, or git repo.

Homepage

Stories


Google’s Gemini 3.1 Flash-Lite targets high-volume, low-latency workloads

Google released Gemini 3.1 Flash-Lite, a faster, cheaper model aimed at high-volume developer workloads, signaling a broader shift toward lighter LLMs for routine backend and data tasks. The launch of [Gemini 3.1 Flash-Lite](https://thenewstack.io/google-gemini-3-1-flash-lite/) emphasizes low-latency responses for tasks where cost is critical, with preview access via the Gemini API in Google AI Studio and enterprise access in Vertex AI, alongside industry moves like OpenAI's GPT-5.3 Instant toward lighter models ([context and availability](https://www.thedeepview.com/articles/openai-google-target-lighter-models)). Independent coverage pegs Flash-Lite at $0.25 per million input tokens and $1.50 per million output tokens (about one-eighth the price of Gemini 3.1 Pro) and notes support for four "thinking" levels that trade speed for reasoning when needed ([pricing and modes](https://simonwillison.net/2026/Mar/3/gemini-31-flash-lite/#atom-everything)). For backend and data teams, that price point makes Flash-Lite a strong default for translation, content moderation, summarization, and structured generation (dashboards, simulations), reserving heavier models for only the hardest requests ([use cases](https://www.thedeepview.com/articles/openai-google-target-lighter-models)). If your pipelines push files, mind Gemini's surface-specific limits across Apps (including NotebookLM notebooks), the API, and enterprise tools: up to 10 files per prompt, 100 MB per file or ZIP with caveats, strict video caps, and code-folder/GitHub-repo constraints, so ingestion doesn't silently truncate or fail ([file-handling constraints](https://www.datastudios.org/post/gemini-file-upload-support-explained-supported-formats-size-constraints-and-document-handling-acr)).
Zooming out, the race to lighter models (OpenAI's GPT-5.3 Instant and Alibaba's Qwen Small Model Series) underscores a clear pattern: push routine throughput to cheaper, faster tiers and escalate to heavyweight reasoning only on ambiguity or failure ([trend snapshot](https://www.thedeepview.com/articles/openai-google-target-lighter-models)).

2026-03-03
google gemini-31-flash-lite gemini-api google-ai-studio vertex-ai
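The routing pattern this story describes (default routine work to the cheap tier, escalate only on ambiguity or failure) can be sketched in a few lines of Python. The model identifiers, task names, and `pick_model` helper below are illustrative assumptions for the sketch, not verified API names:

```python
# Route routine requests to a cheap, fast tier; escalate only when needed.
# Model identifiers are illustrative placeholders, not verified API names.

LIGHT_MODEL = "gemini-3.1-flash-lite"   # cheap tier: $0.25/M input, $1.50/M output per the story
HEAVY_MODEL = "gemini-3.1-pro"          # roughly 8x the price; reserve for hard requests

# The routine workloads the story names as Flash-Lite's sweet spot.
ROUTINE_TASKS = {"translation", "moderation", "summarization", "structured_generation"}

def pick_model(task: str, retry_count: int = 0) -> str:
    """Default to the light tier; escalate on unknown tasks or after a failure."""
    if retry_count > 0:          # a previous light-tier attempt failed or was rejected
        return HEAVY_MODEL
    if task in ROUTINE_TASKS:    # high-volume, latency-sensitive work stays cheap
        return LIGHT_MODEL
    return HEAVY_MODEL           # ambiguous or unlisted tasks go straight to the heavy tier

print(pick_model("summarization"))                  # light tier
print(pick_model("summarization", retry_count=1))   # escalated after a failure
print(pick_model("contract_review"))                # unknown task -> heavy tier
```

In production the escalation signal would come from a verifier (schema validation, eval score) rather than a bare retry counter, but the tiering logic is the same.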

From vibe coding to agentic engineering: PEV, context, and evals that ship

Production teams are moving from vibe coding to agentic engineering that plans, executes, and verifies work with tight context and evals. A practical guide to agentic engineering argues for a Plan → Execute → Verify loop, with humans acting as architects and supervisors while agents plan, write, test, and ship; it cites real adoption signals like TELUS time-savings, Zapier-wide usage, and Stripe’s weekly PR throughput ([guide](https://www.nxcode.io/resources/news/agentic-engineering-complete-guide-vibe-coding-ai-agents-2026)). Context discipline is emerging as a make-or-break factor: a new study shows repo-level AGENTS.md/CLAUDE.md files can degrade agent performance, pushing teams toward slimmer, task-scoped context that’s validated in CI ([AGENTS.md breakdown](https://www.youtube.com/watch?v=miDg-3rSJlQ&t=75s&pp=ygURU1dFLWJlbmNoIHJlc3VsdHM%3D), [DevOps context engineering](https://devops.com/context-engineering-is-the-key-to-unlocking-ai-agents-in-devops-2/)). Architecturally, vibe coding is “already dead” at scale; production agents enforce planning, tests, PR gates, and continuous evals before code lands ([Stripe agent deep dive](https://www.youtube.com/watch?v=V5A1IU8VVp4&pp=ygUYQUkgY29kaW5nIGFnZW50IHdvcmtmbG93)). For hands-on operating patterns—self-checks, context management, and when to escalate to humans—see this practitioner’s playbook ([effective coding agents](https://hackernoon.com/how-to-use-ai-coding-agents-effectively?source=rss)).

2026-03-03
stripe zapier telus claude-code openai-codex
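The Plan → Execute → Verify loop the guide argues for can be sketched as a minimal control flow. The `plan`/`execute`/`verify` callables, the retry budget, and the feedback format below are illustrative assumptions, not part of any real agent framework:

```python
# Minimal Plan -> Execute -> Verify loop with human escalation (illustrative sketch).

def pev_loop(task, plan, execute, verify, max_attempts=3):
    """Run plan/execute/verify; escalate to a human after repeated failures."""
    report = None
    for attempt in range(1, max_attempts + 1):
        steps = plan(task)               # agent drafts a plan for the task
        result = execute(steps)          # agent carries it out (writes code, runs tools)
        ok, report = verify(result)      # tests / evals / PR gates decide whether it ships
        if ok:
            return {"status": "shipped", "attempts": attempt, "report": report}
        task = f"{task} (verifier feedback: {report})"  # feed failures back into planning
    return {"status": "escalate_to_human", "attempts": max_attempts, "report": report}

# Toy run: the verifier rejects any result shorter than 10 characters.
outcome = pev_loop(
    "write greeting",
    plan=lambda t: [f"draft: {t}"],
    execute=lambda steps: steps[0].upper(),
    verify=lambda r: (len(r) >= 10, f"len={len(r)}"),
)
print(outcome["status"])
```

The key design choice matches the story's thesis: verification is a gate the agent cannot skip, and the human is the supervisor of last resort rather than the reviewer of every step.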

Ship an AI RFP-scoring pipeline with n8n + Gemini, and mind the file limits (vs ChatGPT)

You can automate RFP scoring and spreadsheet analysis with Gemini today using n8n, while planning around concrete file-format and size limits across Gemini and ChatGPT. An end-to-end n8n workflow shows how to accept vendor PDFs via a form webhook, fetch the RFP from Drive, extract text, merge both streams, call the Gemini API with a structured prompt that returns JSON scores, and append results to Sheets; Drive auth scopes and download details such as `alt=media` are covered in this guide ([n8n + Gemini RFP evaluation](https://dev.to/hackceleration/building-ai-powered-rfp-evaluation-with-n8n-and-google-gemini-pf5)). For data handling at scale, Gemini supports XLS/XLSX/CSV/TSV and Google Sheets; Gemini chat allows up to 10 files per prompt at 100 MB each, while the Files API permits up to 2 GB per file and 20 GB per project for 48 hours, which is useful for batch or programmatic flows ([Gemini spreadsheet upload and limits](https://www.datastudios.org/post/google-gemini-spreadsheet-uploading-excel-and-csv-support-data-analysis-capabilities-formula-hand)). If you compare providers, ChatGPT accepts many document and data types but caps file size at 512 MB (with practical spreadsheet limits around 50 MB) and also enforces token and image-specific ceilings, which can influence provider selection for large artifacts ([ChatGPT file upload limits](https://www.datastudios.org/post/chatgpt-file-uploading-capabilities-supported-file-types-upload-size-limits-rules-and-document-r)).

2026-02-17
google-gemini n8n google-drive google-sheets google-files-api
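Before wiring uploads into a pipeline like the n8n flow above, it can help to pre-validate each batch against the limits the story cites, so ingestion fails loudly instead of silently truncating. The `check_batch` helper and surface keys below are an illustrative sketch; the numeric limits are as reported in the linked coverage, not independently verified:

```python
# Pre-flight check of an upload batch against the per-surface limits cited above.
# Surface names and this helper are illustrative; the numbers come from the story.

MB, GB = 1024**2, 1024**3

LIMITS = {
    "gemini_chat":      {"max_files": 10,   "max_file_bytes": 100 * MB},
    "gemini_files_api": {"max_files": None, "max_file_bytes": 2 * GB,
                         "max_project_bytes": 20 * GB},
    "chatgpt":          {"max_files": None, "max_file_bytes": 512 * MB},
}

def check_batch(surface: str, file_sizes: list) -> list:
    """Return a list of limit violations for a batch of file sizes (in bytes)."""
    lim = LIMITS[surface]
    problems = []
    if lim.get("max_files") is not None and len(file_sizes) > lim["max_files"]:
        problems.append(f"{len(file_sizes)} files exceeds per-prompt cap of {lim['max_files']}")
    for i, size in enumerate(file_sizes):
        if size > lim["max_file_bytes"]:
            problems.append(f"file {i} ({size} bytes) exceeds per-file cap")
    if lim.get("max_project_bytes") and sum(file_sizes) > lim["max_project_bytes"]:
        problems.append("batch exceeds per-project storage cap")
    return problems

print(check_batch("gemini_chat", [50 * MB] * 12))    # too many files for a chat prompt
print(check_batch("gemini_files_api", [1 * GB] * 5)) # under both caps: no violations
```

Running this as a guard node before the upload step keeps the pipeline's failure mode explicit, which matters because several of these limits differ per surface.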

Salesforce pauses Heroku as AI agents rise; adjust autoscaling and pipelines

Vendors are pivoting from traditional PaaS and CI/CD toward agentic platforms, with Salesforce halting new Heroku features and leaders touting AI agents, underscoring the need to rethink autoscaling and delivery flows. Salesforce put Heroku into sustaining engineering while prioritizing Agentforce ([TechRadar](https://www.techradar.com/pro/salesforce-halts-development-of-new-features-for-heroku-cloud-ai-platform))[^1]; meanwhile, Databricks' CEO argues AI agents will render many SaaS apps irrelevant ([WebProNews](https://www.webpronews.com/the-saas-sunset-why-databricks-ceo-believes-ai-agents-will-render-traditional-software-irrelevant/))[^2], echoing calls for agentic DevOps beyond classic CI/CD ([HackerNoon](https://hackernoon.com/the-end-of-cicd-pipelines-the-dawn-of-agentic-devops?source=rss))[^3]. A real-world ECS/Grafana case study shows AI-heavy, I/O-bound stacks can miss CPU-based autoscaling triggers, requiring new signals and tests ([DEV](https://dev.to/shireen/understanding-aws-autoscaling-with-grafana-gl8))[^4].

[^1]: Confirms Salesforce halted new Heroku features and is prioritizing Agentforce.
[^2]: Summarizes the Databricks CEO's thesis that AI agents will displace traditional SaaS.
[^3]: Opinion piece advocating agentic DevOps supplanting conventional CI/CD pipelines.
[^4]: Demonstrates ECS autoscaling pitfalls for I/O-bound, LLM-integrated workloads using Grafana and k6.

2026-02-10
salesforce heroku agentforce databricks amazon-web-services
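The autoscaling pitfall in the ECS/Grafana case study (I/O-bound LLM calls keep CPU low while requests pile up) suggests scaling on request-level signals instead of CPU alone. The thresholds, metric names, and `scaling_decision` function below are assumptions for illustration, not values from the case study:

```python
# Scale on latency and in-flight requests rather than CPU alone: an I/O-bound
# service waiting on LLM APIs can saturate while CPU stays low, so CPU-only
# triggers miss it. Thresholds and metric names here are illustrative.

def scaling_decision(cpu_pct: float, p95_latency_ms: float, inflight_per_task: float) -> str:
    """Return 'scale_out', 'scale_in', or 'hold' based on I/O-aware signals."""
    if p95_latency_ms > 2000 or inflight_per_task > 50:
        return "scale_out"   # saturation shows up in latency/queueing, not CPU
    if cpu_pct > 80:
        return "scale_out"   # keep the classic CPU trigger as a backstop
    if cpu_pct < 20 and p95_latency_ms < 500 and inflight_per_task < 5:
        return "scale_in"    # genuinely idle on every signal, not just CPU
    return "hold"

# The failure mode from the case study: CPU looks idle while requests queue up.
print(scaling_decision(cpu_pct=15, p95_latency_ms=4200, inflight_per_task=80))
```

On AWS this maps to a target-tracking or step-scaling policy driven by a custom CloudWatch metric (latency or in-flight count) rather than the default CPU utilization metric, with load tests (the article uses k6) confirming the thresholds.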