Every month in AI feels like a year. March 2026 has been no exception — there’s been genuine progress, some overhyped noise, and a few shifts that I think actually matter for everyday people and businesses. Let me cut through the press releases and give you my honest read on where things stand.
The Big Picture: From Hype to Monetisation
The defining story of AI in early 2026 is the shift from “growth at all costs” to “show us the money.” The major AI labs — OpenAI, Anthropic, Google DeepMind — have all shifted their focus toward building revenue, not just topping capability benchmarks. That’s actually a healthy sign for the industry.
What this means practically is that we’re seeing more enterprise features, more API pricing tiers, and more focus on reliability and production use cases than on raw capability demos. The era of “look at this incredible thing the model can do” is giving way to “here’s how this makes your business more money.”
Multimodal AI Is Becoming the Norm
A year ago, text-in, text-out was still the dominant paradigm. In early 2026, multimodal capability — feeding images, audio, video, and documents into AI models alongside text — has become standard across the major platforms.
This matters because it makes AI genuinely more useful for real-world tasks. You can photograph a document and ask questions about it. You can share a screenshot of an error message and get a diagnosis. You can describe a visual problem and get a solution. The friction between “I have a problem” and “the AI can help” has dropped significantly.
AI Agents Are Real Now — But Still Messy
The concept of AI agents — models that can take sequences of actions, use tools, browse the web, execute code, and complete multi-step tasks with minimal human intervention — has gone from research demo to production reality over the past year.
Tools like OpenClaw, Anthropic’s Claude agent frameworks, and various commercial platforms now make it genuinely feasible for non-developers to run AI agents that handle real tasks: managing emails, booking appointments, writing and posting content, monitoring data, and more.
That said, agents are still messy. They make mistakes. They sometimes take unexpected actions. The human-in-the-loop is still essential for anything important. The gap between “impressive demo” and “reliable production system” is real and shouldn’t be underestimated.
My experience running AI agents in my own work: they’re incredibly useful for well-defined, repeatable tasks. They’re much less reliable for anything requiring genuine judgement, nuance, or novel problem-solving. Set your expectations accordingly and you’ll get a lot out of them.
Edge AI Is Quietly Getting Interesting
While most attention stays on the big cloud models, something genuinely interesting is happening at the edge. Smaller, more efficient models are being optimised to run on consumer hardware — laptops, phones, and even microcontrollers. Apple’s on-device AI features, Qualcomm’s NPU chips in Android flagships, and projects like Ollama for running local models on Mac and Windows have made on-device AI a real option.
Why does this matter? Because on-device AI means your data never leaves your machine. For privacy-conscious users — particularly in professional contexts where client confidentiality matters — this is significant. You get AI capability without the cloud dependency.
The models running locally are still a generation behind the cloud frontier, but the gap is closing faster than most people expect.
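To make the “local option” concrete, here’s a minimal sketch of querying a locally running model through Ollama’s HTTP API. This assumes an Ollama server is running on its default port (11434) and that a model — I’m using “llama3.2” here purely as a placeholder name — has already been pulled; nothing in this snippet touches the cloud.

```python
# Minimal sketch: querying a local model via Ollama's HTTP API.
# Assumes an Ollama server on the default port (11434) and a pulled
# model; "llama3.2" below is a placeholder model name.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3.2") -> dict:
    # Ollama's /api/generate endpoint takes a model name, a prompt,
    # and stream=False to get one complete JSON response back.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(prompt: str, model: str = "llama3.2") -> str:
    payload = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The generated text comes back in the "response" field.
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local("Summarise this contract clause in one sentence."))
```

The point isn’t the specific library — it’s that the whole round trip happens on your own machine, which is exactly the confidentiality win I mentioned above.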
What Hasn’t Changed
For all the progress, a few fundamental limitations remain stubbornly present:
- Hallucinations haven’t gone away. AI models still confidently generate false information. Verification remains essential for anything factual.
- Context windows are larger but not infinite. Long documents and extended conversations still challenge even the best models.
- Reasoning is better but not reliable. The latest models handle complex multi-step reasoning much better than they used to — but they still fail in ways that feel surprising and random.
- Cost is still a barrier for serious use. Running AI agents at scale is not cheap. The economics of AI automation still don’t work for many small business use cases.
My Honest Take
AI in 2026 is genuinely transformative for people who engage with it thoughtfully. It’s also still a tool that requires skill, judgement, and appropriate scepticism to use well.
The people getting the most out of AI right now aren’t the ones chasing every new model release. They’re the ones who have identified specific, repeatable tasks where AI reliably saves them time — and built systems around those use cases. That’s where the actual value lives.
If you’re not sure where to start, I’m always happy to have a conversation about what makes sense for your specific situation. That’s kind of what I do.
— Chris
