Perspective · 12 min read · April 14, 2026

AI Won't Fix Your Broken Processes — It Will Amplify Them

Companies rush to apply AI to broken processes and wonder why things get worse. AI doesn't fix dysfunction — it automates it at scale. Here's why process improvement must come before automation.

Alex Ryan
CEO & Co-Founder

A building products manufacturer asked us to help automate their customer complaint routing with AI. The idea was simple: incoming complaints arrive by email and phone, an AI classifies them by type and severity, and routes them to the right team for resolution.

We asked to see the current routing process. What we found was chaos.

Complaints were classified into 23 categories — but 8 of those categories overlapped. “Product defect” and “quality issue” meant the same thing, but different reps used different codes depending on who trained them. Severity levels existed but had no clear criteria — “high” meant “the customer is angry” to one team and “the product failed in the field” to another. Routing rules sent structural complaints to engineering and cosmetic complaints to production, except when the customer was a national account, in which case everything went to the key accounts team, except on weekends, when everything went into a general queue that nobody checked until Monday.

We looked at this and said: “If we build an AI to automate this process, we’ll be automating confusion at the speed of light.”

They didn’t have an AI problem. They had a process problem, and process problems are one of the core reasons AI pilots fail. Bolting AI onto that process wouldn’t solve it; it would amplify every inconsistency, every ambiguity, and every workaround at a scale that makes manual dysfunction look quaint.


The Amplification Problem

Here’s what most AI vendors won’t tell you: AI is an amplifier, not a fixer. It takes whatever process you feed it and does more of it, faster. If the process is good, you get more good outcomes, faster. If the process is broken, you get more broken outcomes, faster.

This sounds obvious when you say it out loud. But it’s amazing how many companies skip right past it. The excitement of AI — the demos, the vendor promises, the board pressure to “do something with AI” — creates a gravity that pulls teams straight from “we have a problem” to “let’s apply AI to it” without stopping at “let’s understand the problem first.”

What amplification looks like in practice:

  • Bad routing rules automated = complaints going to the wrong team at 10x the speed, with no human in the loop to catch the error
  • Inconsistent data entry automated = an AI trained on garbage data confidently producing garbage classifications
  • Undocumented exceptions automated = the AI doesn’t know about the workarounds your team uses, so it routes edge cases incorrectly every single time
  • Conflicting business logic automated = the AI picks one interpretation and applies it consistently, which means it’s consistently wrong 50% of the time

When a human follows a broken process, they compensate. They use judgment, institutional knowledge, and workarounds to get to the right outcome despite the process. When an AI follows a broken process, it follows it literally. Every flaw. Every inconsistency. Every gap. At scale.


The 5 Process Smells That Should Stop Any AI Project

Before you build AI on top of any process, check for these red flags. If you find them, fix the process first.

1. Multiple People Describe the Process Differently

Ask five people how the process works. If you get five different answers, you don’t have a process — you have five individual workflows wearing a trench coat pretending to be a process.

What this means for AI: The AI needs one consistent process to learn from. This is the AI readiness gap in miniature: organizational maturity matters more than technology. If the training data reflects five different workflows, the model will learn an average of all five, which matches none of them. Predictions will seem plausible but will be subtly wrong in ways that are hard to diagnose.

The fix: Get the five people in a room. Document the actual process — not the idealized version, the real one. Reconcile the differences. Then standardize. The AI can learn from the standardized process, and the standardization itself often reveals improvement opportunities that shrink the scope of what needs to be automated.

2. The Process Has More Exceptions Than Rules

Some processes have evolved so many exceptions that the exception handling is more complex than the main flow. “Route to Team A, unless it’s a national account, unless it’s under warranty, unless it involves a recalled product, unless the customer has an open escalation, unless…”

What this means for AI: Exception-heavy processes are hard to automate reliably because the exception logic is often undocumented, subjective, or dependent on context that’s not in the data. The AI will handle the main flow fine and botch the exceptions — which are usually the highest-stakes cases.

The fix: Map every exception. Ask “why does this exception exist?” For each one, determine: Is this a legitimate business rule that should be part of the standard process? Or is this a workaround for a problem that should be fixed upstream? Eliminate the workarounds. Codify the legitimate rules. Reduce the exception count by 50-70% before automating.
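To make the difference concrete, here's a minimal sketch in Python. The field names, team names, and rule ordering are all hypothetical, not drawn from any real client system; the point is the shape of the change, from exception-on-exception branching to a short, explicit, ordered rule table.

```python
from dataclasses import dataclass

@dataclass
class Complaint:
    """Hypothetical complaint attributes used by the routing rules."""
    is_national_account: bool
    under_warranty: bool
    recalled_product: bool
    open_escalation: bool

def route_legacy(c: Complaint) -> str:
    # Exception-heavy version: each "unless" became another branch,
    # and the real precedence lives in people's heads, not the code.
    if c.open_escalation:
        return "escalations"
    if c.recalled_product:
        return "recall_team"
    if c.under_warranty:
        return "warranty"
    if c.is_national_account:
        return "key_accounts"
    return "team_a"

def route_codified(c: Complaint) -> str:
    # After the audit: workarounds were eliminated upstream, and the
    # exceptions that survived are explicit, ordered, documented rules.
    rules = [
        (lambda x: x.recalled_product, "recall_team"),
        (lambda x: x.is_national_account, "key_accounts"),
    ]
    for predicate, team in rules:
        if predicate(c):
            return team
    return "team_a"
```

The codified version is what an AI (or a plain rules engine) can be trained and validated against: every rule is visible, ordered, and testable, which is exactly what the legacy version lacked.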

3. The Outputs Don’t Match Across Systems

If the same process produces different results depending on which system captures it — different totals, different classifications, different counts — the process has a data integrity problem that AI will inherit and scale.

What this means for AI: An AI trained on data from System A will make different predictions than one trained on data from System B, even though both systems supposedly capture the same process. In production, the AI’s output will conflict with one or both systems, eroding trust.

The fix: Reconcile the systems. Identify the source of truth. Fix the data integrity issues. Then automate.

4. Success Depends on Specific People

If the process only works well when Maria is running it, and falls apart when she’s on vacation, the process isn’t a process — it’s Maria. Her institutional knowledge, her relationships, her judgment calls are doing the work that the process documentation should be doing.

What this means for AI: You can’t train an AI on Maria’s judgment if Maria’s judgment isn’t captured anywhere. The AI will learn the documented process, which is the version that doesn’t work without Maria.

The fix: Extract Maria’s knowledge. Document her decision-making criteria, her exception handling, her prioritization logic. Embed it in the process documentation and the business rules. Then — and only then — can an AI learn to approximate what Maria does.

5. Nobody Can Define “Good” Output

If you ask “how do we know this process is working well?” and the answer is vague — “we just know” or “when nobody complains” — you have a measurement problem. Without clear success criteria, you can’t train an AI (what does it optimize for?), you can’t validate its output (how do you know it’s right?), and you can’t measure improvement (better than what?).

What this means for AI: An AI without clear success metrics is an AI without guardrails. It might optimize for the wrong thing entirely — minimizing complaints by routing everything to the most responsive team, for example, instead of routing to the team best equipped to fix the problem.

The fix: Define success metrics for the process. Be specific. “Complaints resolved within 48 hours” is measurable. “Customer satisfaction” without a number isn’t. These metrics become the AI’s objective function and your validation criteria.
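A sketch of what "specific and measurable" looks like in practice, using the 48-hour example from above. The SLA value and function names are illustrative; the point is that a metric defined this precisely can serve as both the AI's objective function and the validation check on its output.

```python
from datetime import datetime, timedelta

# Hypothetical success metric: "complaints resolved within 48 hours".
SLA = timedelta(hours=48)

def within_sla(opened: datetime, resolved: datetime) -> bool:
    """Did a single complaint meet the resolution SLA?"""
    return (resolved - opened) <= SLA

def sla_rate(complaints: list[tuple[datetime, datetime]]) -> float:
    """Share of complaints resolved inside the SLA window.

    This one number is the baseline before automation and the
    validation criterion after it.
    """
    hits = sum(within_sla(opened, resolved) for opened, resolved in complaints)
    return hits / len(complaints)
```

"Customer satisfaction" can't be computed this way until someone attaches a number and a threshold to it, which is precisely why it fails as a success criterion.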


The Process Audit: What to Do Before Any AI Project

Every AI project should start with a process audit. Not a six-month Lean Six Sigma engagement — a focused, practical assessment of the process you’re about to automate.

Step 1: Map the Current State (As-Is)

Document the process as it actually works, not as it’s supposed to work. Include:

  • Every step, including informal ones (“then I check with Sarah”)
  • Every decision point, including the criteria used (especially the unwritten ones)
  • Every exception and how it’s currently handled
  • Every system involved and how data flows between them
  • Every handoff between people or teams, including the informal communication that makes handoffs work

This takes 1-2 weeks of observation and interviews. It’s not glamorous. It’s essential.

Step 2: Identify the Dysfunction

With the current-state map in hand, categorize each element:

  • Works well, automatable: These are candidates for AI. The process is consistent, documented, and produces reliable results.
  • Works well, not automatable: These require human judgment that can’t be reasonably captured. Keep them manual.
  • Doesn’t work well, fixable: These need process improvement before automation. Fix them first.
  • Doesn’t work well, needs redesign: These are fundamentally broken. No amount of AI will help. Redesign from scratch.

Step 3: Fix Before You Automate

Address the “doesn’t work well” categories. This might mean:

  • Standardizing inconsistent procedures
  • Documenting tribal knowledge
  • Eliminating unnecessary exceptions
  • Reconciling conflicting business rules
  • Defining clear success metrics
  • Fixing data quality issues at the source

Step 4: Design the Future State (To-Be)

Now design the AI-enabled process — starting from the improved process, not the broken one. This should clearly show:

  • Where the AI operates and what decisions it makes
  • Where humans are in the loop and what triggers escalation
  • How AI outputs are validated
  • What monitoring is in place to catch problems
  • How the process handles AI errors gracefully

A Real Example: How Fixing the Process Cut the AI Scope in Half

A metal fabrication company wanted to automate their quoting process. Sales reps received RFQs, estimated material costs, calculated labor hours, and produced quotes. The company wanted an AI to generate quotes automatically from incoming RFQs.

When we mapped the current process, we found:

  • Three different estimating methods depending on which sales rep handled the RFQ. One used historical job data. One used a cost-plus formula. One “just knew” from experience.
  • No standard material pricing. Each rep maintained their own pricing spreadsheet, updated at different intervals, with different supplier prices.
  • Inconsistent labor estimates. The same part geometry got wildly different labor hour estimates from different reps — variations of 40% or more on identical work.
  • Exception-heavy approval routing. Quotes over $50K needed VP approval, except for existing customers with a history of similar orders, except when the margin was below 25%, except during Q4 when the threshold dropped to $30K.

If we’d built an AI on top of this process, it would have learned three different estimating methods, used inconsistent material prices, produced labor estimates with 40% variance, and confused itself on the approval routing.

Instead, we spent six weeks on process improvement:

  1. Standardized the estimating method — adopted the cost-plus approach with historical data validation. One method, documented, with clear inputs and formulas.
  2. Centralized material pricing — built a single pricing database updated weekly from supplier feeds. One source of truth.
  3. Calibrated labor estimates — analyzed historical job data to build standard labor rates by part type, material, and complexity. Validated with the shop floor.
  4. Simplified approval routing — reduced from 12 exception paths to 3 clear rules based on dollar value and margin threshold.
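What "three clear rules" might look like as code, assuming illustrative thresholds rather than the client's actual values. Compare this to the original routing, where the $50K threshold had exceptions for existing customers, margin, and Q4 stacked on top of it.

```python
def approval_level(quote_value: float, margin: float) -> str:
    """Simplified approval routing: three ordered rules, no exceptions.

    Thresholds are hypothetical, chosen to mirror the figures in the
    narrative above, not the client's production values.
    """
    if margin < 0.25:
        return "vp_approval"   # Rule 1: thin margins always escalate
    if quote_value > 50_000:
        return "vp_approval"   # Rule 2: large quotes escalate
    return "auto_approve"      # Rule 3: everything else proceeds
```

A routing function this small is trivial for a human to verify and for an automated quoting system to call, which is much of why the post-improvement AI scope shrank.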

After the process improvement, the AI project scope dropped dramatically:

  • Original estimate: $280K, 16 weeks, covering material estimation, labor estimation, pricing, and approval routing with extensive exception handling
  • Post-improvement estimate: $140K, 9 weeks, covering automated quote generation from standardized inputs with simple rule-based routing

The process improvement cost about $45K. It saved $140K on the AI project and resulted in a system that was more accurate, more maintainable, and more trusted by the sales team — because it was built on a process that actually made sense.

The cheapest AI project is the one whose scope was reduced by fixing the underlying process first. Half the cost, half the timeline, twice the trust.


Why Companies Skip the Process Work

If fixing the process first is so clearly the right move, why do companies skip it?

The AI budget exists, but the process improvement budget doesn’t. AI has executive sponsorship and allocated funds. Process improvement sounds like operational housekeeping. So the team uses the AI budget and works around the process problems — adding complexity, cost, and risk to the AI project that could have been avoided.

Vendors don’t recommend it. AI vendors make money building AI, not fixing processes. They’ll accommodate your broken process with custom logic, exception handling, and “flexibility” that inflates the project scope (and the invoice). Few will tell you to go fix your process first and come back in six weeks. We’ve written about hiring an AI consultant vs. building in-house; the right partner is the one who tells you to fix the process first.

It’s not exciting. “We spent two months standardizing our complaint categories” doesn’t make for a good LinkedIn post. “We deployed AI-powered complaint routing” does. The incentive structure favors technology over the process work that makes technology effective.

People underestimate the dysfunction. When you live inside a broken process every day, it feels normal. The workarounds are second nature. The exceptions are “just how we do things.” It takes an outside perspective to see that the process is fundamentally broken.


The Process-First Framework

Here’s the framework we use with every client considering intelligent automation:

1. Observe Before Proposing

Spend time watching the actual process in action. Not reading the SOP — watching people work. The gap between documented and actual is where the dysfunction lives.

2. Quantify the Dysfunction

Don’t just identify problems — measure them. How many rework cycles? What’s the error rate? How much time is spent on workarounds? How often do exceptions occur? These numbers make the case for process improvement and establish the baseline for measuring AI impact.

3. Fix the Foundation

Standardize, simplify, and document. This isn’t a transformation project — it’s a cleanup. Target 4-8 weeks for most processes. If it takes longer, the process probably needs a redesign, not a cleanup.

4. Then Automate

Build the AI on the improved process. The scope will be smaller. The data will be cleaner. The training will be faster. The adoption will be higher because the process makes sense to the people using it.

5. Measure Against the Baseline

Compare results against the pre-improvement baseline, not the pre-AI baseline. This tells you how much value the AI added on top of the process improvement — which is the honest measure of whether the AI investment was worth it.


The Bottom Line

AI is a powerful amplifier. It takes whatever you give it and does more of it, faster. If you give it a well-designed process with clean data and clear rules, it amplifies efficiency. If you give it a broken process with inconsistent data and conflicting logic, it amplifies chaos.

The companies that get the most value from AI aren’t the ones with the most sophisticated models. They’re the ones with the most well-designed processes. They do the boring work of standardizing, documenting, and simplifying before they do the exciting work of automating.

If your process doesn’t work well when humans run it, AI won’t fix it. AI will do it wrong, faster, and with more confidence.

Fix the process first. Then automate.


Not sure which of your processes are AI-ready and which need work first? Talk to our team about a process readiness assessment, or take our AI Readiness Assessment to see where your organization stands.

AI Strategy · Process Improvement · Intelligent Automation · Operations

If this is the kind of thinking you want in your inbox, The Logit covers AI strategy for industrial operators every two weeks. No vendor content. No hype. Just honest takes from practitioners.

Subscribe to The Logit
About the author
Alex Ryan
CEO & Co-Founder at Ryshe

Alex Ryan is CEO of Ryshe, where he helps engineering and manufacturing companies build the data foundations that make AI projects actually deliver. He's spent over a decade in the gap between what vendors promise and what ships to production. He's learned to tell clients what they need to hear, not what they want to hear.

Want to Discuss This Topic?

Let's talk about how these insights apply to your organization.