The recent rush to “AI everything” has created a disconnect between the people building systems and the people meant to use them. At the same time, far too many so-called innovations are simply marketing theatre or layers of unnecessary complexity wrapped in buzzwords.
When everyone wants to automate and optimise everything, everywhere, all at once, the result is confusing, wasteful, and of no value to the enterprise. When 95 percent of AI projects fail to deliver financial impact, as a recent MIT study shows, that is not just a sign of growing pains; it’s a sign of strategic blindness.
We’ve confused speed with direction.
The irony is that the industry isn’t wrong about AI’s potential. It will reshape the enterprise. But not in its current form. We need a new, human-centric approach to AI, where technology is built with purpose and an understanding of where it adds value.
AI is a tool, not an oracle.
You still need people who understand what it’s doing, why it’s doing it, and what happens when it goes wrong. If a misplaced model run wipes your GitHub repo, the model won’t save you. Humans will.
Too much AI?
Too often, AI is layered on top of existing complexity instead of simplifying it. It’s like the battle of the standards all over again, but this time it’s supersized. Simply put, in platform engineering, this is dangerous.
Teams end up managing tools to manage other tools, while the human element, which includes understanding workflows, communication and, most importantly, decision-making, gets ignored. The industry keeps shouting that “AI will be the new UI,” but that only makes sense if you’ve never worked inside a real enterprise.
You don’t fix complexity by piling more complexity on top.
A sober approach to AI is required: one that recognises that real progress comes from designing systems that make people more capable, not less relevant. It doesn’t reject ambition; it rejects noise. It starts with clear problems and works towards the simplest solution.
Retrieval Augmented Generation (RAG), for instance, links AI outputs directly to verified company sources, cutting hallucinations and wasted compute. Model Context Protocol (MCP) allows users to interact with infrastructure through natural language while remaining fully within guardrails. These are examples of AI that respects its boundaries and serves its purpose.
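To make that concrete, here is a minimal sketch of the RAG pattern in Python. Everything in it is illustrative: the in-memory document store stands in for a real vector database, and ask_model() is a placeholder for whatever LLM client you actually use.

```python
# Minimal sketch of Retrieval Augmented Generation (RAG).
# The document store, scoring, and ask_model() are hypothetical
# stand-ins for a real vector store and LLM client.

VERIFIED_DOCS = [
    {"id": "runbook-12", "text": "Production deploys are frozen every Friday after 16:00 UTC."},
    {"id": "policy-7", "text": "All new stacks must enable encryption at rest by default."},
    {"id": "faq-3", "text": "Platform support hours are 08:00 to 18:00 CET on weekdays."},
]

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    terms = set(question.lower().split())
    scored = sorted(
        VERIFIED_DOCS,
        key=lambda d: len(terms & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def ask_model(prompt: str) -> str:
    """Placeholder for an actual LLM call via your provider's SDK."""
    return "<model answer grounded in the supplied sources>"

def answer(question: str) -> str:
    sources = retrieve(question)
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in sources)
    # The model is instructed to answer only from verified sources,
    # which keeps outputs auditable and cuts hallucinations.
    prompt = (
        "Answer using ONLY the sources below. Cite source ids.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return ask_model(prompt)

print(answer("When are production deploys frozen?"))
```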
By grounding AI outputs in verified company data, these approaches give developers accurate, auditable answers instead of unsubstantiated ones. This also makes AI lighter and more efficient, because it draws only on the information it needs. Developers can interact directly with their infrastructure, create stacks, check configurations, or retrieve platform information without leaving compliance behind. This is not AI replacing the platform; it is AI operating inside the platform.
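Here is a sketch of what “operating inside the platform” can look like with MCP, using the FastMCP helper from the MCP Python SDK. The platform operations themselves (create_stack, get_stack_config) are hypothetical placeholders; the point is that every tool call passes through explicit guardrails.

```python
# Sketch of an MCP server exposing platform actions as guarded tools.
# Uses the MCP Python SDK's FastMCP helper; the platform calls below
# (create_stack, get_stack_config) are hypothetical placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("platform-tools")

ALLOWED_ENVIRONMENTS = {"dev", "staging"}  # guardrail: no direct prod access

@mcp.tool()
def create_stack(name: str, environment: str) -> str:
    """Create a stack in an approved environment."""
    if environment not in ALLOWED_ENVIRONMENTS:
        return f"Refused: '{environment}' is outside the allowed guardrails."
    # A real implementation would call the platform API here.
    return f"Stack '{name}' created in {environment}."

@mcp.tool()
def get_stack_config(name: str) -> str:
    """Return the (read-only) configuration of an existing stack."""
    # Placeholder lookup; swap in your platform's API client.
    return f"config for {name}: replicas=2, encryption=enabled"

if __name__ == "__main__":
    mcp.run()  # natural-language clients call these tools via MCP
```

An MCP-aware client can then translate “spin up a staging stack called billing” into a create_stack call, while the guardrail code, not the model, decides what is permitted.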
Anyone claiming that Internal Developer Platforms are about to vanish under a wave of generative magic is selling fiction, not engineering.
At the centre of this new wave of AI integration is the basic idea that people should remain in control. The benefit is twofold: processes become smoother and more efficient, and the people behind them gain the clarity to focus on solving real problems rather than fighting their tools.
Measurable outcomes
The strength of any platform lies in its ability to reflect the context and intent of the teams using it. AI, no matter how advanced, cannot yet grasp the organisational nuance behind a policy change, a deployment freeze, or a compliance audit. So, when people guide the system, AI becomes a collaborator that enhances teams’ intuition and judgement rather than removing their critical thinking entirely.
This is how trust is built in enterprise technology, not by removing people, but by giving them visibility and control.
But for this human-centred model to work, measurement must evolve. The industry is still obsessed with speed and automation metrics, chasing meaningless numbers about lines of code or time to deploy. What matters is whether these tools improve decision-making and create systems that are easier to maintain.
Measurable outcomes relate to sustainability, security, and satisfaction, not vanity KPIs.
If AI cannot demonstrate tangible improvements in these areas, it’s not progress; it’s distraction disguised as innovation. Sooner or later, someone pays for that distraction, whether in cost overruns, outages, or failed ventures. And then there is the issue of waste.
The promise of AI often masks its appetite for resources, both human and environmental. Every unnecessary prompt, every redundant model run, every layer of complexity burns energy and attention. Sober AI means resisting that temptation. It means building systems that do less but do it well. Systems that perform defined tasks within clear limits.
Efficiency is about restraint.
Enterprises that learn to value that restraint will be the ones that truly benefit from AI, while the rest chase hype into a wall of technical debt and wasted compute. My bet is that in the next 12 to 18 months we’ll see a much more bearish AI market, simply because the industry has been spending like a gambler at a slot machine.
Silicon Valley optimism won’t protect you when investors start asking where the profit is, so it's wise to consider measurable outcomes and ROI now before making any large-scale AI purchasing decision.
Please deploy responsibly
The future of AI in enterprise technology is not about spectacle or chasing the latest model. It’s about building tools that respect the human context, enhance judgement, and integrate into workflows without adding noise (or yet more tools to manage).
Sober AI doesn’t promise to replace people; it empowers them. It prioritises measurable outcomes over vanity metrics, and organisations that embrace this approach will not just survive the AI wave; they will lead it. The rest will discover the hard way that hype cycles always end, but good engineering doesn’t.