A useful test for an AI strategy document is to read it and ask: where, exactly, has the company taken a position? What does it stand to be wrong about?
A surprising number of AI strategies fail this test. They contain ambition ("become the leading AI-enabled provider of …"), they contain investment commitments ("$Xm over three years"), they contain governance structures ("AI council, monthly cadence, exec sponsor"). They do not contain a clear statement of what the company believes about the world that, if false, would invalidate the plan.
What gets elided are the bets. Every AI strategy takes implicit positions on three of them. Naming them is the difference between a strategy and a roadmap.
Bet 01: The capability bet
What will the model capability frontier look like in eighteen months, and is your plan robust to it being wrong by a year in either direction?
Strategies that bet slow — assuming the next eighteen months look much like the last six — tend to commit heavily to today's model capabilities, build around their current limits, and treat reliability as the primary engineering challenge. They invest in retrieval, evals, and tooling that make today's models more dependable. If the bet is right, the investment pays back. If the capability frontier moves faster than expected, the strategy spends the back half of its eighteen months retrofitting work it could have skipped.
Strategies that bet fast — assuming significant capability jumps inside the planning horizon — commit to interface-level investments (the way users interact with the system) and stay loose on the underlying model. They build their products so the model can be swapped, upgraded, or expanded with minimal cost. If the bet is right, the strategy compounds capability gains automatically. If the capability frontier stalls, the strategy looks indulgent and the team has to backfill the reliability work later.
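The "stay loose on the underlying model" posture can be made concrete in code: product logic depends on a thin provider interface, and swapping or upgrading the model means writing one new adapter. A minimal sketch in Python; the names (`ModelProvider`, `StubProvider`, `summarise`) are illustrative, not a reference implementation:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Completion:
    text: str
    model_id: str  # record which model produced the output, useful for evals later


class ModelProvider(Protocol):
    """The only surface product code is allowed to touch.

    Upgrading or replacing the underlying model means writing a new
    implementation of this protocol; nothing downstream changes.
    """

    def complete(self, prompt: str) -> Completion: ...


class StubProvider:
    """Placeholder implementation; a real one would call a vendor API."""

    def __init__(self, model_id: str):
        self.model_id = model_id

    def complete(self, prompt: str) -> Completion:
        # Echo the prompt so the flow is visible without a live model.
        return Completion(text=f"[{self.model_id}] {prompt}", model_id=self.model_id)


def summarise(provider: ModelProvider, document: str) -> Completion:
    # Product logic is written against the protocol, not any one vendor.
    return provider.complete(f"Summarise: {document}")
```

Under this design, the "fast" bet pays off automatically: a capability jump is absorbed by adding a provider, and the choice of model becomes a one-line change at the call site rather than a rewrite.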
Most companies belong somewhere on this spectrum, but most strategies don't say where. The strategy that says "we are betting that summarisation and structured extraction will improve substantially in the next twelve months, and search-quality retrieval will not" is a strategy that can be checked, argued with, and updated. The strategy that says "we will leverage advances in AI capabilities" is not.
Bet 02: The build / buy bet, sharpened
The build-or-buy question is older than AI. The reason it gets harder in this context is that "buy" now includes "use a foundation model behind someone else's API" and "build" now includes "operate an inference cluster". The categories have widened, and the strategic implications of each have changed.
A useful refinement: there are three layers in an AI-native system, and the build/buy question is different at each.
The model layer
Almost no company should be training its own foundation models. The economics don't work, the talent isn't available, and the rate of improvement at the frontier outpaces what any single company can match. Buy here, almost always.
The capability layer
This is the layer of prompts, evals, tools, retrieval systems, and orchestration that sits between your data and the model. Some of it can be bought (Copilot, vendor agents, packaged solutions). Some of it has to be built (anything specific to your business). The strategic question is which capabilities are distinctive enough that buying them gives you a generic version of something a competitor will also have. Build what is distinctive; buy what is generic.
The data and interface layer
This is unavoidably yours. Buy a vendor agent for your support workflow and you've outsourced a generic capability; build the same agent on top of your own data and your own customer interface and you have an asset that compounds. The companies that confuse the model layer with the data layer end up with a thin veneer over a vendor and no defensibility.
A strategy that takes a position on all three layers is one a CFO can fund and a CTO can build. A strategy that says "we will partner where appropriate and build where it makes sense" has avoided the question.
Bet 03: The internal / external bet
AI can reshape what a company sells, or it can reshape how a company runs. Most companies have to pick which edge to lead with.
The companies that lead with external — AI features in the customer experience, AI-augmented products — tend to attract customer attention, generate revenue stories, and pull marketing energy. The risk is that the customer-facing AI is built on top of operations that haven't kept up. The customer experience is shiny; the underlying delivery is the same as it was a year ago; cost-to-serve doesn't shift; gross margins don't move. Eventually the customer notices that the shiny front-end is sitting on a slow back-end.
The companies that lead with internal — AI in operations, in delivery, in the way the team works — tend to be quieter in the market but compound faster. Their cost-to-serve drops. Their delivery speed improves. Their gross margins rise. The risk is that the market mistakes their lack of customer-facing AI for an absence of AI strategy, and competitors with more visible (if less substantive) deployments take customer mindshare.
Both can work. Neither works if you try to do both at once, in the same horizon, with the same budget. Companies that try produce a half-finished customer-facing layer and a half-finished operational layer; neither lands.
The strategic call is: which leads, and by how long. A pattern that works for mid-sized companies: lead with internal for two quarters, build the operational capability, then ship external features on top of the proven operational layer. Lead with external when the customer-facing differentiation is acute and the competitive window is short. Most companies are not in that situation as often as their strategies imply.
Why naming the bets matters
Two reasons.
A named bet can be checked. Six months into the strategy, you can look at the capability bet and ask: was the frontier where we predicted? You can look at the build/buy positions and ask: did we build the right things? Did we leave the right things to the vendors? The strategy becomes a hypothesis the company can update. The unnamed version becomes an opinion that quietly slides as conditions change, with no learning recorded.
A named bet is a teaching device. Junior people on the team understand the strategy when they understand the bets. They can apply the strategy to their day-to-day decisions because the principle is concrete. A roadmap of initiatives does not generalise; a stated bet generalises into every new decision the team faces.
The simple test
Before you take an AI strategy to the board, ask whether the document answers three questions:
- What do we believe about model capabilities in eighteen months, and what would prove us wrong?
- Which of the three layers — model, capability, data/interface — are we building, and which are we buying?
- Are we leading with internal change or external change, and for how long before we switch?
If the document doesn't answer those three, it is a roadmap dressed in strategy language. Roadmaps can be useful, but they don't survive contact with a board that probes — and they don't help the team make the small decisions every week that compound into whether the strategy works.
The strategies that hold up under pressure are the ones that name what they're betting on. The ones that don't, drift.
If you're drafting (or re-drafting) an AI strategy and want a second set of eyes that has built the things the strategy will commit to, that's the room we like to be in. Start a conversation →