Every organization has a graveyard of AI projects.
They're not officially dead. They're "in development" or "being refined" or "waiting for more data." But everyone knows the truth. These projects are never going to deliver value.
The problem isn't the technology. These projects were doomed from the start. Built on flawed assumptions, unclear objectives, or solutions looking for problems.
We've seen this pattern dozens of times. Companies pour money into AI initiatives that sound impressive in board presentations but deliver nothing in production. Meanwhile, straightforward applications that could generate real ROI sit untouched because they're not sexy enough.
Here are three AI project types that need to die. And what to build instead.
1. The "AI-Powered" Dashboard Nobody Uses
What It Looks Like
An executive attended a conference and came back excited about "AI-driven insights." IT got tasked with building a dashboard that uses machine learning to surface anomalies, predict trends, and recommend actions.
Six months and $300K later, the dashboard exists. It's technically impressive. The anomaly detection works. The predictions are reasonably accurate.
Two people log in regularly. And only to check if it's still running.
Why It Fails
The dashboard answers questions nobody was asking.
This happens when teams start with technology ("let's use ML!") instead of decisions ("what choices are we trying to improve?"). The insights don't connect to anything anyone actually does. The predictions aren't actionable because nobody has authority or process to act on them. The recommendations just sit there, ignored, because they don't fit into existing workflows.
We see this constantly. Beautiful dashboards with sophisticated AI that generate zero business value. They're disconnected from how work actually gets done.
What to Do Instead
Start with a decision, not a dashboard.
Pick one specific decision that gets made regularly in your organization. Pricing adjustments. Inventory reorders. Staffing levels. Customer escalations. Something concrete, with a clear owner, that happens on a predictable cadence.
Then ask: what information would make this decision better? What predictions would be valuable? What's the cost of getting it wrong today?
Build AI that improves that one decision. Measure whether the decision actually gets better. Not whether the model is accurate, but whether business outcomes improve.
Then expand.
The difference between a useless AI dashboard and a valuable one isn't the sophistication of the algorithms. It's whether someone changes their behavior based on what it shows them.
2. The Chatbot That Makes Customers Angrier
What It Looks Like
Customer service costs are high. Someone suggests an AI chatbot to "deflect" tickets and reduce call volume. The business case looks compelling. If we can handle 30% of inquiries automatically, we save $500K annually.
The chatbot launches. It can answer FAQs, check order status, and handle password resets.
Six months later: call volume is unchanged. Customer satisfaction scores have dropped. Support agents are frustrated because they're handling the same number of calls, but now customers arrive already angry from fighting with a bot first.
The chatbot handled plenty of conversations. But it handled the easy ones. Questions customers could already answer themselves with the website's search function. The hard questions, the ones actually driving call volume and cost, still need humans. And now there's an extra step of frustration before customers reach them.
Why It Fails
Most customer service chatbots are built on a flawed assumption: that a big chunk of support volume consists of simple questions that could be automated.
In reality, customers who can solve their own problems usually do. They Google it, check the FAQ, or figure it out. The ones who contact support have already tried the easy routes. Their issues are complex, emotional, or require judgment that bots can't provide.
Deflection-focused chatbots optimize for the wrong metric. They measure "conversations handled" instead of "problems solved." They reduce easy tickets that weren't costing much anyway while doing nothing about the complex issues that actually drive support costs.
Worse, they add friction. Customers who know they need a human have to prove it to a bot first. That frustration bleeds into every interaction that follows.
What to Do Instead
Use AI to augment agents, not replace them.
The real opportunity in customer service AI isn't deflection. It's acceleration and quality improvement.
Real-time suggestions that help agents resolve issues faster. Automatic summarization so agents don't waste time reading through ticket history. Intelligent routing that gets complex issues to specialists immediately instead of bouncing between generalists. Sentiment analysis that flags escalating situations before they explode.
These applications keep humans in the loop while making them dramatically more effective. Handle time drops. First-call resolution improves. Customer satisfaction goes up. Agent burnout goes down.
The ROI is real and measurable. Not based on optimistic deflection projections, but on actual improvements in operational metrics.
If you must build a chatbot, measure it on customer outcomes. Issues resolved, satisfaction scores, escalation rates. Not conversations handled.
3. The Predictive Model Nobody Trusts
What It Looks Like
The data science team built a model that predicts which customers will churn, which deals will close, or which equipment will fail. The model is accurate. Validated on historical data, properly backtested, technically sound.
Nobody uses it.
Sales ignores the churn predictions because "I know my customers better than an algorithm." Operations dismisses the maintenance alerts because they've been wrong before. Once, memorably, and everyone still talks about it. Executives request the model's output for reports but make decisions based on gut feel anyway.
The data science team is frustrated. They built what was asked for. The business is frustrated. They invested in AI and got nothing.
Why It Fails
Accuracy isn't the same as trust. Predictions aren't the same as decisions.
Most predictive models fail not because they're wrong, but because they're disconnected from how decisions actually get made. They output a probability or a score, but they don't tell users what to do about it. They don't explain why they reached their conclusion. They don't account for information the user has that the model doesn't.
When a model says a customer will churn but the account manager just had a great call with them last week, who's right? The model might be. Statistically speaking, that call probably doesn't matter much. But the account manager has no way to evaluate that. They just see a prediction that contradicts their experience, so they ignore it.
Trust is earned through explanation, transparency, and a track record of being right in ways that matter. Most models get deployed without any of these.
What to Do Instead
Build for adoption, not just accuracy.
Before you deploy any predictive model, answer these questions:
- Who will act on this prediction, and what specifically will they do differently?
- How will users evaluate whether a prediction is worth trusting in any specific case?
- What information do users have that the model doesn't, and how will they incorporate it?
- How will you demonstrate that acting on predictions leads to better outcomes?
Then build accordingly.
Add explainability. Not just feature importance charts for data scientists, but plain-language explanations that make sense to the people using the output. "This customer is flagged because their usage dropped 40% last month and they haven't responded to the last two outreach attempts."
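To make the idea concrete, here is a minimal sketch of how a flag like that could be turned into a plain-language sentence. Everything in it is an illustrative assumption: the thresholds, the field names, and the wording are hypothetical, not a real product's API or any specific model's output.

```python
# Hypothetical sketch: turning model signals into a plain-language
# explanation. Thresholds, field names, and wording are illustrative
# assumptions, not a real system's interface.

def explain_churn_flag(account: dict) -> str:
    """Build a one-sentence, human-readable reason for a churn flag."""
    reasons = []
    # Assumed signal: month-over-month usage change, as a percentage.
    if account.get("usage_change_pct", 0) <= -30:
        reasons.append(
            f"their usage dropped {abs(account['usage_change_pct'])}% last month"
        )
    # Assumed signal: count of recent outreach attempts with no response.
    if account.get("ignored_outreach", 0) >= 2:
        reasons.append(
            f"they haven't responded to the last "
            f"{account['ignored_outreach']} outreach attempts"
        )
    if not reasons:
        return "This customer is flagged, but no single driver stands out."
    return "This customer is flagged because " + " and ".join(reasons) + "."

print(explain_churn_flag({"usage_change_pct": -40, "ignored_outreach": 2}))
```

The point of the sketch isn't the code; it's the design choice. The explanation names drivers the account manager can verify against their own knowledge, which is exactly what a bare probability score doesn't do.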
Create feedback loops. When users override the model, capture why. Use those overrides to improve the model and to identify cases where human judgment really does add value.
Start with decisions where the model can be advisory rather than authoritative. Let users build trust gradually rather than asking them to hand over judgment to an algorithm on day one.
And measure adoption, not just accuracy. A model that's 80% accurate and actually used beats a model that's 95% accurate and ignored.
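The arithmetic behind that claim is simple enough to sketch. Treat the value a model captures as roughly accuracy times adoption; the specific numbers below are illustrative assumptions, not benchmarks from any real deployment.

```python
# Back-of-the-envelope sketch of the adoption-vs-accuracy trade-off.
# All numbers are illustrative assumptions.

def value_captured(accuracy: float, adoption_rate: float) -> float:
    """Rough fraction of potential value realized:
    correct predictions that actually get acted on."""
    return accuracy * adoption_rate

# A model that's 80% accurate and used 90% of the time:
used_model = value_captured(accuracy=0.80, adoption_rate=0.90)      # ~0.72

# A model that's 95% accurate but acted on only 10% of the time:
ignored_model = value_captured(accuracy=0.95, adoption_rate=0.10)   # ~0.10

print(used_model > ignored_model)
```

Under those assumptions, the less accurate model captures roughly seven times more value, which is why adoption belongs on the scorecard next to accuracy.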
Sound familiar? If your organization is pouring money into AI projects that aren't delivering, you're not alone. The first step is an honest assessment of what's working and what isn't.
The Common Thread
All three of these failed project types share the same root cause: starting with technology instead of problems.
The dashboard started with "let's use AI for insights" instead of "what decisions are we trying to improve?"
The chatbot started with "let's automate customer service" instead of "what's actually driving support costs and how might we address it?"
The predictive model started with "let's predict X" instead of "what would someone do differently if they knew X, and would that action actually be possible and valuable?"
When you start with technology, you end up building impressive capabilities that nobody uses. When you start with problems, you often end up with simpler solutions that actually work.
How to Kill These Projects
Killing a project is politically difficult. Someone's reputation is attached to it. Budget has been allocated. Progress has been reported.
But zombie projects are worse than dead projects. They consume resources, attention, and organizational patience for AI initiatives. Every failed project makes the next one harder to fund.
Here's how to kill gracefully:
Reframe as learning, not failure
Every project generates insights about your data, your processes, your organization's readiness. Document those insights. They have value even if the project didn't succeed.
Quantify the ongoing cost
Projects that are "almost done" or "just need a few more months" often linger forever. Calculate what you're spending. Not just dollars, but attention, opportunity cost, and organizational credibility.
Redirect to something valuable
Don't just kill a project. Announce what you're doing instead. "We're pivoting from the customer service chatbot to an agent assistance tool" is easier to accept than "we're shutting down the chatbot."
Get executive cover
The people closest to a project often can't kill it because their careers are attached to it. Leadership needs to create permission to fail fast and redirect resources.
What to Build Instead
If you're looking for AI projects that actually deliver value, focus on:
Process acceleration: AI that makes existing workflows faster without requiring behavior change. Document processing, data entry, summarization, routing.
Decision support: AI that provides useful input to human decisions without trying to replace human judgment. Recommendations, risk flags, relevant precedents.
Quality improvement: AI that catches errors, ensures consistency, or improves outputs. Code review, content checking, data validation.
Insight discovery: AI that surfaces patterns humans wouldn't find on their own, presented in ways that connect to actionable decisions.
Notice what these have in common: they augment human work rather than replacing it, they fit into existing processes, and they deliver value even if they're not perfect.
The unsexy AI projects are usually the ones that work.
Ready to Audit Your AI Portfolio?
If you suspect your organization has AI projects that should be killed, or if you want to make sure your next initiative doesn't end up on this list, start with an honest assessment.
Our AI Readiness Assessment evaluates not just your technical capabilities, but your organizational readiness to actually use AI effectively. We'll tell you what's working, what isn't, and where to focus next.
Or, if you want a comprehensive review of your current AI initiatives with specific recommendations, let's talk.
Ryshe is an AI and data consultancy backed by Wiley|Wilson's 125 years of engineering excellence. We help organizations build AI that actually works. And we'll tell you the truth about which projects should be killed.