2026 started strong in AI: the big players are focusing less on launching brand-new models and more on making them useful in the real world (agents, voice, search, hardware, etc.). Here are 4 key points for entrepreneurs, creators, and developers:
1. The battle is no longer just about models, it's about real users
Several analyses show that ChatGPT still dominates in weekly users (800–900M), but Gemini is growing very fast, already reaching roughly 35–40% of ChatGPT's web/mobile scale and gaining ground, especially on desktop. Practical translation: you can't ignore the Google ecosystem (Search + Workspace + Gemini 3) if you sell digital products or services.
2. Gemini 3 as the “default search engine”
Google is heavily promoting Gemini 3 Flash as the default engine for the AI experience in Search, providing quick answers and links to sources, while maintaining the classic search bar for verification. This will change how traffic reaches your websites, blogs, and funnels, so “AI-first” content (optimized for rich results) becomes key.
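To make "optimized for rich results" concrete, here is a minimal sketch (in Python, with every value a placeholder) of emitting schema.org Article markup as JSON-LD, one common way to make a page machine-readable for search and AI answer experiences:

```python
import json

# Minimal schema.org Article markup as JSON-LD, one common form of structured data
# behind rich results. Every value below is a placeholder: swap in your own content.
article_jsonld = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How to launch your first AI-assisted funnel",
    "author": {"@type": "Person", "name": "Your Name"},
    "datePublished": "2026-01-05",
    "description": "A step-by-step guide for creators and small teams.",
}

# Render the <script> tag that goes in the page's <head> (or via your template engine).
print(f'<script type="application/ld+json">{json.dumps(article_jsonld, ensure_ascii=False)}</script>')
```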
3. OpenAI and the push towards voice and audio
OpenAI is preparing audio-centric models and devices for 2026, with the goal of making voice the primary interface for using AI. Think about experiences where your users speak to your product instead of filling out forms or submitting tickets.
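As a rough sketch of what that could look like, here is a voice loop built on the openai Python SDK: transcribe the user's audio, answer with a chat model, and speak the reply back. The model names ("whisper-1", "gpt-4o-mini", "tts-1"), the voice, and the file names are assumptions; swap in whatever your account and product actually use:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Speech -> text: transcribe the user's recorded question.
with open("question.wav", "rb") as audio_in:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_in)

# 2. Text -> text: answer with a chat model instead of a form or ticket flow.
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are the support assistant for my product."},
        {"role": "user", "content": transcript.text},
    ],
)
answer = reply.choices[0].message.content

# 3. Text -> speech: synthesize the answer so the user hears it back.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
with open("answer.mp3", "wb") as audio_out:
    audio_out.write(speech.read())
```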
4. The focus shifts from training to inference
This week’s technical briefs highlight that the battle is shifting towards inference: smaller, faster, cheaper models that are easier to deploy on-premises or in alternative clouds. For you, this means more options for setting up your own agents and assistants without always depending on a single, expensive provider.
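For example, here is a minimal sketch of pointing the standard openai Python client at a small model served on your own machine, assuming Ollama's OpenAI-compatible endpoint on its default port and a model you have already pulled (e.g. llama3.2):

```python
from openai import OpenAI

# Assuming Ollama (or any OpenAI-compatible server) is running locally on its default
# port and a small model has already been pulled, e.g. `ollama pull llama3.2`.
local = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed-locally")

resp = local.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user", "content": "Summarize this support ticket in one sentence: ..."}],
)
print(resp.choices[0].message.content)
```

Because the endpoint speaks the same API, your agent code stays the same whether it calls a big hosted provider or your own box: switching is just a change of base_url and model name.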
What would you be most interested in seeing us break down in detail: Gemini 3 in Search, audio/voice strategies, or how to leverage smaller models for your own agents?