here’s the question I ask every CEO who tells me they want to use ai.
it’s not what tools should we buy. it’s not what use cases should we target first. it’s this: show me how information moves through your business today.
most of them can’t.
they can tell me what software they’ve bought, which vendors are pitching them, which dashboards they log into. but if I ask them to draw on a whiteboard how a customer’s data moves from the first form they fill out through to the last invoice they pay, they stall. and that’s before we’ve said the word “ai”.
this is where most ai projects die. not in the model. not in the tool. in the operation underneath.
why most ai advice misses the point
almost every piece of ai advice aimed at CEOs starts at the wrong layer. it starts with the tool. which model. which platform. which vendor. which use case.
a systems thinker starts one layer deeper. at the information flow. at the operation. at the system the ai will live inside of.
here’s what I’ve learned watching clients run these projects. the wrong tool slows you down. the right tool dropped on top of a broken operation makes your mistakes bigger, faster, and with a thin layer of authority on top: broken output, delivered with higher confidence. that’s worse than no ai at all.
if the operation doesn’t hold up to daylight, the ai won’t save you.
we’ve been here before
this isn’t the first time an entire industry bought the wrong layer.
in 2006, enterprises spent billions on SOA platforms. the technology worked. most of the deployments died anyway. companies bought a platform, skipped the operational discipline underneath it, and ended up with the same software, just more expensive to run.
in 2012, cloud did the same thing. we all agreed the technology was real. most of the early migrations cost more than the on-prem they replaced, because nobody rebuilt the operations around the new reality. they lifted and shifted.
in 2017, microservices did the same thing. the technology worked. the teams that skipped the observability, governance, and rollout discipline ended up with a distributed monolith wearing microservices clothing.
every wave, the same pattern. the technology worked. the adoption failed. companies skipped the operations layer and hoped the tool would do the work the system was supposed to do.
ai in 2026 is where software was in each of those moments. we know the technology works. the question is whether you’re going to skip the same step again.
one data point
Gartner published research in april 2026: only 28 percent of enterprise ai use cases in infrastructure and operations fully meet their ROI expectations. the survey covered 782 infrastructure and operations leaders.
most of the analysis you’ll see on that number focuses on the 72 percent that failed. that’s the wrong focus. the interesting question is what the 28 percent did differently.
in my experience — and I’ve spent enough time inside these projects to have a real answer — the 28 percent made four moves. every one of them. and the 72 percent made one or two and hoped the rest would sort itself out.
the four moves
here are the four. I walk every client through all of them, in order, every time.
diagnose
before you do anything else, map the operation you’re trying to change. not the one in the pitch deck. the real one. the one with the handoffs that nobody wrote down, the spreadsheet emailed between three people, the report built from a view that’s out of date.
diagnose is about seeing the system clearly. map the ground truth first. fixes come later.
when you see the system clearly, you usually find two or three things that would have killed the ai project before it started — a data source nobody owns, a process that only works because one person in accounting knows a trick, a handoff that looks automated but is actually manual. those are the things the ai can’t save you from. those are the things you fix first.
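the map doesn’t need special tooling. here’s a minimal sketch of what “seeing the system” can look like as a structure you can query — the flow, the owners, and the handoffs below are all invented for illustration, not any real client’s operation:

```python
# a toy diagnose map: the operation as a directed flow of information.
# every node name, owner, and handoff here is hypothetical.
flow = {
    "web_form":      {"owner": "marketing", "feeds": ["crm"]},
    "crm":           {"owner": "sales",     "feeds": ["billing_sheet"]},
    "billing_sheet": {"owner": None,        "feeds": ["invoice"]},  # the emailed spreadsheet
    "invoice":       {"owner": "finance",   "feeds": []},
}
# handoffs that look automated but are actually a person copy-pasting
manual_handoffs = {("crm", "billing_sheet")}

# the two questions the map answers before anyone says the word "ai":
unowned = [name for name, node in flow.items() if node["owner"] is None]
manual = sorted(manual_handoffs)
print("unowned sources:", unowned)    # unowned sources: ['billing_sheet']
print("manual handoffs:", manual)     # manual handoffs: [('crm', 'billing_sheet')]
```

the value isn’t in the data structure. it’s that once the flow is written down, the unowned sources and the manual handoffs stop hiding.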
build
once you’ve diagnosed the operation, build the smallest version of the system that actually works end to end. not the demo. the real thing. inspectable, owned, and running on real data, in small sprints.
most teams build the coolest version they can demo. that’s a mistake. build the smallest version that works in real conditions, put it in front of the people who will actually use it, and let the feedback shape the next sprint.
every engagement I’ve run has had a “first real version” that looked embarrassing compared to the mockup. every one of them survived the next year. the fancy demos didn’t.
operate
this is the move most companies skip. and this is the move that kills them.
build is where a project ends for most consulting engagements. operate is where a system begins. operating means somebody is watching it — actually watching it — every day. somebody knows within two clicks whether the system ran last night, whether the outputs are drifting, whether the inputs changed, whether the edge cases are multiplying.
a system nobody’s watching is already broken. it just hasn’t broken publicly yet. by the time it does, you’ve lost three months of trust with your team and six months of momentum with your executives.
when I look at a failed ai project — and I’ve looked at a lot of them — the most common thing I find is that nobody owned the operation after handoff. the consultants left. the in-house team was too busy. the dashboard stopped working in month three and nobody noticed until month five.
if you can’t tell me who is watching the system, it’s running on borrowed time.
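the daily check in operate is cheap to make concrete. a minimal sketch, assuming a run log with hypothetical fields (`finished_at`, `output_mean`, `edge_case_rate`) and made-up thresholds, not any particular client’s schema:

```python
from datetime import datetime, timedelta

def health_check(runs, baseline_mean, now, drift_tolerance=0.2, max_edge_rate=0.05):
    """answer the operate questions in one pass: did it run, is it drifting,
    are the edge cases multiplying. thresholds here are illustrative."""
    flags = {}
    # did the system run in the last 24 hours?
    last_run = max((r["finished_at"] for r in runs), default=None)
    flags["missed_run"] = last_run is None or now - last_run > timedelta(hours=24)
    if runs:
        latest = max(runs, key=lambda r: r["finished_at"])
        # are the outputs drifting away from the baseline?
        drift = abs(latest["output_mean"] - baseline_mean) / baseline_mean
        flags["output_drift"] = drift > drift_tolerance
        # are the edge cases multiplying?
        flags["edge_cases"] = latest["edge_case_rate"] > max_edge_rate
    return flags

now = datetime(2026, 5, 1, 9, 0)
runs = [
    {"finished_at": datetime(2026, 4, 30, 23, 5), "output_mean": 104.0, "edge_case_rate": 0.01},
    {"finished_at": datetime(2026, 4, 29, 23, 2), "output_mean": 101.0, "edge_case_rate": 0.01},
]
print(health_check(runs, baseline_mean=100.0, now=now))
# prints {'missed_run': False, 'output_drift': False, 'edge_cases': False}
```

the thresholds don’t matter much. what matters is that the questions from the daily check become a script somebody owns and runs, instead of a hope.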
optimize
month six has to beat month one. if it doesn’t, you built an automation, not a system.
this is where most advice falls silent. vendor demos don’t show month six. case studies almost never include the quarterly review numbers. everyone wants to talk about the launch. nobody wants to talk about the compounding.
but compounding is the whole point. the clients who win with ai are the ones where every quarter the system is smarter than the last, because they kept feeding it the operational signal it needs to improve. that signal comes from operate — the watching, the logging, the edge cases. optimize is where you feed that signal back into the design.
the first version is a guess. the second version is a correction. the third version is where value starts to compound. by the tenth version, the system bears little resemblance to where you started, and it’s generating real leverage for the business.
how to start
if you’re at the beginning, start with diagnose. don’t let anyone sell you a tool until you’ve spent a week mapping the operation.
if you’re stuck — the project is live but the value isn’t showing up — you skipped operate. go back and fix the watching layer before you touch anything else.
if your ai project already died, it was never a system in the first place. the next one has to make all four moves.
the whole lesson in one paragraph
the companies that get ai right aren’t smarter. they make all four moves. diagnose, build, operate, optimize. if any one is missing, the system drifts and the automation breaks. the ones that win don’t skip steps. the ones that lose always tell me the same thing at the end: we built it. we didn’t operate it.