The email from the board usually sounds something like this: “We need to see an AI strategy at the next board meeting. Where are we on AI?”
And then the scramble begins. The CTO pulls together a deck. It has a slide about market trends (“AI is transforming every industry”). A slide about what competitors are doing (mostly speculation). A slide listing 15 potential use cases. A slide with vendor logos. A slide about risks. A slide about talent. Maybe a timeline that shows “Phase 1: Quick Wins” and “Phase 3: Transformation” with no explanation of how you get from one to the other.
The board nods. Questions are polite. “This looks comprehensive.” Nobody asks the hard questions because nobody in the room knows what questions to ask.
Six months later, the company has funded three AI pilots. Two are stalled. One produced a demo that impressed leadership but isn’t in production. The board asks for an update. A new deck is assembled.
We’ve seen this cycle at dozens of companies. The problem isn’t that leadership doesn’t care about AI. It’s that what passes for an “AI strategy” at most companies isn’t a strategy at all. It’s a wish list — a collection of interesting possibilities with no framework for making choices, no connection to business strategy, and no honest assessment of what it will take to execute.
What an AI Strategy Is Not
Before we talk about what a real AI strategy looks like, let’s clear out the things that masquerade as strategy.
It’s Not a Use Case List
“We could use AI for demand forecasting. And predictive maintenance. And customer service. And quality inspection. And document processing.” That’s a brainstorm, not a strategy. A list of things AI could theoretically do tells you nothing about which ones you should do, in what order, or why.
It’s Not a Technology Assessment
“We’ve evaluated Azure AI, AWS SageMaker, and Google Vertex. Here are the feature comparisons.” That’s vendor evaluation. It answers “what tools exist?” but not “what are we trying to build?” We’ve compared Azure AI Search and Elasticsearch ourselves; comparisons like that are useful for evaluation, but evaluation isn’t strategy. Choosing a platform before you know what you’re building is like buying a drill before you know which wall needs the hole.
It’s Not a Pilot Plan
“We’ll run three pilots in Q1 and scale the winners in Q2.” This is a project plan, not a strategy. It doesn’t answer why these three pilots instead of thirty other options, what capabilities they build toward, or how they connect to the company’s competitive position.
It’s Not a Trend Summary
“AI is projected to add $15.7 trillion to the global economy by 2030.” Great. What does that mean for your specific company, in your specific industry, with your specific competitive dynamics? Nothing, until you connect the macro trend to your micro reality.
A strategy is a framework for making choices. If your “AI strategy” doesn’t help you say no to things, it’s not a strategy.
The 5 Questions Every AI Strategy Must Answer
A real AI strategy answers five questions that most companies never get to — because they’re busy answering easier, less important questions instead.
Question 1: Where Does AI Create Competitive Advantage vs. Commodity Improvement?
This is the most important strategic question, and it’s the one most companies skip entirely.
Some AI applications create genuine competitive advantage — they give you capabilities your competitors can’t easily replicate. Others create commodity improvements — they make you incrementally more efficient at things everyone in your industry will soon do equally well.
Commodity improvements are things like:
- Automating back-office document processing
- Using AI for basic customer service routing
- Implementing standard predictive maintenance on common equipment
- Deploying off-the-shelf AI tools for email, scheduling, and meeting summaries
These are worth doing — they reduce cost and improve efficiency. But they won’t differentiate you. Every competitor will have them within 2-3 years, and many are available as turnkey SaaS products.
Competitive advantages are things like:
- AI that leverages proprietary data your competitors don’t have
- Automation of processes unique to your business model
- AI capabilities embedded in your product or service that customers pay for
- Models trained on decades of your domain-specific operational data
Why this matters for strategy: Commodity improvements should be bought, not built. Use vendor tools, SaaS products, and standard implementations. Don’t invest custom development budget in things you can subscribe to.
Competitive advantages should be built and owned. These are where you invest in custom development, proprietary data assets, and internal capability. These are the AI investments that create durable value.
A real AI strategy categorizes every potential initiative along this axis and allocates budget accordingly.
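To make that categorization concrete, here's a minimal sketch in Python. The initiative names, the three advantage criteria, and the build/buy labels are illustrative assumptions, not a prescribed rubric; a real assessment weighs more factors than this.

```python
# Sketch: sorting candidate AI initiatives into build vs. buy buckets
# along the competitive-advantage vs. commodity axis. All names and
# criteria below are illustrative, not a methodology.

from dataclasses import dataclass


@dataclass
class Initiative:
    name: str
    uses_proprietary_data: bool     # leverages data competitors don't have
    unique_to_business_model: bool  # automates a process unique to us
    customer_facing_capability: bool  # embedded in the product customers pay for


def categorize(i: Initiative) -> str:
    """Candidate for 'build' only if it taps at least one source of
    durable advantage; everything else defaults to 'buy'."""
    if (i.uses_proprietary_data
            or i.unique_to_business_model
            or i.customer_facing_capability):
        return "competitive advantage -> build and own"
    return "commodity -> buy (vendor/SaaS)"


portfolio = [
    Initiative("Automated quoting from 15 years of job data", True, True, False),
    Initiative("Meeting summaries and email drafting", False, False, False),
]

for init in portfolio:
    print(f"{init.name}: {categorize(init)}")
```

The point of the exercise isn't the code; it's that the categorization forces a yes/no answer for every initiative, so the budget allocation follows from explicit criteria rather than enthusiasm.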
Question 2: What Data Do We Need to Own?
In an AI-driven world, data is the competitive moat. Models are commoditizing — the algorithms and architectures are increasingly available to everyone. What differentiates your AI is the data it’s trained on and operates against.
Your AI strategy should explicitly identify:
- What proprietary data assets do we have that competitors don’t? Customer behavior data, operational data, domain-specific training data, historical performance data.
- What data should we be collecting that we’re not? This is your data acquisition strategy — the intentional effort to build data assets that will power future AI capabilities.
- What data do we need to govern more carefully because it’s strategic? Not all data needs the same level of investment. Your AI strategy should identify the data domains that matter most and ensure they’re getting appropriate attention.
- What data partnerships or acquisitions would strengthen our AI position? Sometimes the data you need exists outside your organization.
The companies that will dominate with AI in 2030 are the ones building strategic data assets today. Not buying AI tools — building the data those tools need to create value that competitors can’t replicate.
Question 3: What Capabilities Do We Build vs. Buy?
This is different from the vendor evaluation question. It’s not “which vendor do we use?” It’s “what do we need to be able to do ourselves?”
Build internally when:
- The capability is core to your competitive advantage
- The AI requires deep integration with proprietary systems and data
- You need the ability to iterate rapidly based on business feedback
- The long-term cost of vendor dependency exceeds the cost of building
Buy externally when:
- The capability is commodity (see Question 1)
- The AI is well-served by mature, standard tools
- The implementation is bounded and well-defined
- Speed to value is more important than customization
The common mistake: Companies try to build everything (too expensive, too slow) or buy everything (no differentiation, vendor lock-in). The strategy should be explicit about which capabilities sit in each bucket and why.
This question also drives your talent strategy. If you’re building AI capabilities internally, you need data engineers, ML engineers, and domain experts. If you’re mostly buying, you need integration specialists, vendor managers, and internal champions. Different strategies require different teams; we’ve written about AI team structure, and who to hire depends entirely on whether you’re building or buying. The build vs. buy decision framework itself deserves its own analysis.
Question 4: What’s the 3-Year Investment Thesis?
An AI strategy needs a financial framework. Not a detailed budget — a thesis about how AI investments create value and how that value is captured.
The thesis should cover:
- Total investment envelope: What are we prepared to invest in AI over three years, across infrastructure, talent, projects, and vendor spend?
- Value creation model: How do AI investments translate to business value? Cost reduction? Revenue growth? Risk mitigation? New product capabilities? Be specific about the mechanisms.
- Payback expectations: What’s the expected payback period for different types of AI investments? Foundation-building (data platforms, integration layers) has a longer payback than application-building (specific use cases). Both are necessary. The strategy should set expectations for each.
- Portfolio balance: How much goes to foundational capabilities vs. applications? How much to competitive advantage vs. commodity improvement? How much to near-term value vs. longer-term bets?
The common mistake: Evaluating AI projects individually instead of as a portfolio. A data platform investment might have a 24-month payback on its own — but it enables 10 AI projects that each have 6-month paybacks. The portfolio math works; the individual project math doesn’t. Your investment thesis needs to capture these dependencies. Our guide on calculating AI ROI covers the financial framework in more detail.
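The portfolio arithmetic is easy to make explicit. Here's a quick sketch using hypothetical dollar figures chosen to match the example above (a platform with a 24-month standalone payback that enables ten projects with 6-month paybacks); the specific numbers are assumptions for illustration, not benchmarks.

```python
# Sketch of portfolio-level vs. project-level payback.
# All dollar figures are hypothetical, chosen to produce the
# 24-month platform / 6-month project paybacks from the text.

platform_cost = 1_200_000         # assumed data platform investment ($)
platform_annual_value = 600_000   # standalone value -> 24-month payback

project_cost = 100_000            # assumed cost per enabled project ($)
project_annual_value = 200_000    # -> 6-month payback per project
num_projects = 10                 # projects the platform enables

# Evaluated alone, the platform looks slow (months to pay back):
platform_payback = platform_cost / platform_annual_value * 12  # 24.0

# Evaluated as a portfolio, total cost vs. total annual value:
total_cost = platform_cost + num_projects * project_cost
total_annual_value = platform_annual_value + num_projects * project_annual_value
portfolio_payback = total_cost / total_annual_value * 12  # ~10.2 months

print(f"Platform alone: {platform_payback:.1f} months")
print(f"Portfolio:      {portfolio_payback:.1f} months")
```

Under these assumptions, the platform alone pays back in 24 months, but the portfolio it enables pays back in roughly 10 — which is exactly why project-by-project evaluation kills the foundational investments that make everything else possible.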
Question 5: What Has to Be True for This to Work?
Every strategy has assumptions. The honest ones make those assumptions explicit and test them.
Common assumptions in AI strategies that need testing:
- “Our data is clean enough to train models on.” Test it. Do a data quality assessment on the specific data domains your strategy depends on. We’ve written about what an AI readiness assessment covers — this is the foundation.
- “We can hire the AI talent we need.” Reality check. Can you compete for ML engineers in your market? At your salary bands? In your location? If not, your build strategy needs adjustment.
- “Our technology infrastructure can support AI workloads.” Verify. Legacy infrastructure may need significant investment before it can support training and inference workloads.
- “Leadership will sustain investment through the foundation-building phase.” Be honest. Foundation work takes 12-18 months before it produces visible AI results. Will the board maintain patience and budget through that period? If not, your strategy needs to include quick wins that demonstrate momentum.
- “The organization will adopt AI tools.” Don’t assume. Change management is a critical capability. If your culture resists new tools, your strategy needs to address that directly.
Good Strategy vs. Bad Strategy: A Side-by-Side
Bad AI Strategy:
“We will leverage AI across the enterprise to drive efficiency and innovation. In Phase 1, we will deploy AI pilots in demand forecasting, predictive maintenance, and customer service. In Phase 2, we will scale successful pilots. In Phase 3, we will achieve AI-driven transformation.”
What’s wrong: No basis for choosing these three pilots over any others. No connection to competitive strategy. No capability-building logic. No honest assessment of prerequisites. “Scale successful pilots” isn’t a plan. “AI-driven transformation” isn’t a destination.
Good AI Strategy:
“Our competitive advantage in specialty manufacturing depends on our ability to deliver complex, custom orders faster than competitors. AI investments will focus on three capability areas that directly support this advantage: (1) automated quoting that reduces quote turnaround from 5 days to same-day, leveraging 15 years of historical job data that competitors don’t have; (2) intelligent scheduling that optimizes our shop floor for job-shop complexity; (3) predictive quality that catches defects earlier in the process, reducing rework and improving on-time delivery.
These capabilities require investment in data foundations first: standardized job history data, real-time shop floor data integration, and a unified quality data model. We’ll invest $200K in these foundations over Q1-Q2, then build the quoting automation in Q3-Q4 as our first application, targeting $400K in annual revenue impact from faster quote turnaround and higher win rates.
Commodity AI applications (email, document processing, meeting summaries) will be addressed through Microsoft 365 Copilot licenses, not custom development.”
What’s right: Connected to competitive strategy. Specific about where AI creates advantage vs. commodity. Honest about prerequisites. Sequenced logically. Financially grounded. Says no to things (commodity applications = buy, not build).
The Strategy Development Process
If you’re staring at a blank deck and a board meeting deadline, here’s the process we use with clients in our AI strategy engagements:
Week 1-2: Business Context
Before you think about AI, get clear on business strategy. What’s your competitive position? Where are you trying to win? What capabilities would make you more competitive? What threats keep leadership up at night?
AI strategy divorced from business strategy is just technology shopping. Start with the business.
Week 3-4: Capability Assessment
Assess your current state honestly. Data quality. Technology infrastructure. Team capabilities. Organizational readiness. This is where most companies want to skip ahead — don’t. The gap between where you are and where your strategy needs you to be is the most important input to the plan.
Week 5-6: Strategic Framework
Answer the five questions. Categorize opportunities as competitive advantage vs. commodity. Identify the data assets that matter. Decide what to build vs. buy. Develop the investment thesis. Test the assumptions.
Week 7-8: Roadmap and Governance
Translate the strategy into a sequenced roadmap with clear milestones, decision points, and kill criteria. Establish the governance structure — who makes decisions, how investments are evaluated, how progress is measured.
What you deliver to the board: Not a list of cool things AI could do. A strategic framework that connects AI investments to business outcomes, makes hard choices about priorities, and is honest about what it will take.
What Boards Actually Need to Hear
When you present your AI strategy to the board, they don’t need the technology tutorial. They need answers to three questions:
1. How does AI connect to our competitive position? Not generically — specifically. Which AI capabilities would make us harder to compete with? Which are table stakes we need just to keep up?
2. What does it cost and what do we get? Total investment over 3 years. Expected value creation. The timeline between investment and payback. The risks and what we’re doing to mitigate them.
3. What do we need to decide? Investment level. Build vs. buy trade-offs. Organizational changes. Talent strategy. The board’s job is to approve the strategy and ensure the resources are available — give them clear decisions to make.
The best board presentations on AI strategy are the shortest. If you need 60 slides to explain your strategy, you don’t have a strategy — you have a research report.
The Bottom Line
“AI strategy” has become one of the most overused and least understood terms in enterprise leadership. Most of what companies call an AI strategy is a technology shopping list disguised as strategic thinking.
A real AI strategy starts with business strategy, makes hard choices about where AI creates competitive advantage vs. commodity improvement, gets honest about data and capability prerequisites, and commits to a financial thesis that leadership will sustain through the foundation-building phase.
It should fit on 10 slides. It should say no to more things than it says yes to. And it should make everyone slightly uncomfortable — because real strategy requires trade-offs, and trade-offs mean giving up things that seem appealing.
If your AI strategy makes everyone happy, it’s not a strategy. It’s a wish list. And wish lists don’t ship.
Need help building an AI strategy that’s actually strategic? Talk to our team about our AI strategy engagements, or take our AI Readiness Assessment to understand your starting point before you plan your destination.