Strategy

You Don't Need to Understand AI. You Need Better Decisions.


Greg Murphy

March 13, 2026 • 7 min read

Let me be honest with you about something.

After 30 years in higher education — including 11 years helping shape KAUST's recruitment and enrollment strategy — I know what good decisions, made consistently by good people, look like over time. The Times Higher Education Arab University Rankings recognized that work for three consecutive years, culminating in the #1 spot. Good decisions compound. That's true in institutions…and it's true in AI.

But these days, I see something that worries me more than the old way of doing things. Organizations are pouring money into AI: building dashboards, running pilots, hiring data teams. Yet they continue to make the same mediocre decisions they have always made. Just faster. And with fancier graphics.

I believe we need to focus on a more appropriate question: What decision are you actually trying to make better?

The Conversation That Put Words to What I Was Feeling

Meeting Matt Kesby, author of Untangling AI, and exploring his ideas, methodology, and frameworks firsthand was genuinely transformational. In a space drowning in noise and hype, Matt makes the complex feel clear, and the intimidating feel possible.

What struck me most about Matt's thinking is what he gets right (and what most AI conversations miss entirely): the technology isn't actually the point.

And I mean that more literally than it sounds. Currently, AI conversations are dominated by questions like "Which LLM is best?" "Should we be on Claude or ChatGPT?" "Do we need our own infrastructure, or can we use the cloud?" "How do we harness these tools together most efficiently?"

Don't get me wrong — those aren't unimportant questions. But they are the wrong first questions. And asking them first is exactly how organizations end up spending a year on AI strategy and still not making a single better decision.

It's a bit like hiring an architect and spending six months debating which brand of hammer they should use — before you've decided what you're building or who's going to live in it.

Matt's frameworks push you upstream — away from the noise and back to the fundamentals: What problem are you solving? Who owns the outcome? What does a better decision actually look like in practice — not in theory, not in a pilot, but on a real mid-week afternoon when something has to happen?

That's where the value lives. Not in the model. In the moment of decision.

And once you've got clarity there, the technology question becomes almost obvious. The right tool reveals itself when you know what job it needs to do. Without that clarity, you're just collecting hammers.

The Gap Nobody Talks About

One statistic from a recent industry report jumped off the page at me: nearly 74% of companies struggle to achieve and scale value from their AI initiatives.

Seventy-four percent. That's not a technology failure. That's a strategy failure.

The usual culprits: fragmented data, weak decision-making frameworks, and cultural resistance.

I'd suggest another: nobody stopped to ask what decision they were trying to improve in the first place.

In enrollment management, I learned early that data only has value if it's clear, consistent, and useful at the moment of decision. The most sophisticated predictive model in the world is worthless if it sits unused while real decisions get made on gut instinct and assumptions.

AI is no different.

What "Decision Intelligence" Actually Means in Plain Language

Forget the jargon. Decision intelligence is really just asking: Are the right people getting the right information at the right time to make better choices?

That's it.

The technology (AI agents, machine learning models, the platforms that support them) is the plumbing. The point is the decision at the end. The offer letter to the student. The risk flag. The resource allocation.

The biggest unlock isn't a new tool. It's a new question. Instead of "how do we use AI?" ask "what decision are we making, who owns it, and what information would make it better?"

Once you've answered that, the technology part gets a lot simpler.

The Human in the Loop Still Matters. A Lot.

Matt is clear about this — and it's something I see consistently in my own work. Scaling AI decisions doesn't mean removing humans from the process. It means giving humans better support so their judgment is better informed.

The professionals who get the most out of AI aren't the ones who hand decisions over to the machine. They're the ones who use AI to stress-test their instincts, surface what they might have missed, and move faster with more confidence.

That doesn't require a computer science degree. It requires curiosity, better questions, and a bit of humility about what we don't know.

The Bottom Line

AI is changing how decisions get made. Organizations that figure this out will have a real advantage over those still debating whether to start.

But the competitive edge isn't in the algorithm. It's in the clarity.

Know what decision you're trying to make. Know who owns it. Know what "better" looks like. Then find the AI that supports that — not the other way around.

That's work worth doing.


Greg Murphy

AI Coach & Strategist — Transforming Operations & Driving Growth. With 25+ years building operational systems across sectors, Greg brings real-world change management expertise to AI adoption. No hype. No fear. Just practical strategies that make AI work for your people and bottom line. Based in Abu Dhabi, UAE.

Connect with Greg at AICoaches.com