TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of proven innovation logic. Technology doesn't solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
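The arithmetic behind such a business case is deliberately simple. As a minimal sketch, the cycle-time example from the text can be monetized like this; the case volume and hourly cost are made-up assumptions, not figures from any real project:

```python
# Illustrative only: the 48h -> 12h figures mirror the example in the text;
# cases_per_quarter and cost_per_hour are hypothetical assumptions.

def quarterly_savings(baseline_hours: float, target_hours: float,
                      cases_per_quarter: int, cost_per_hour: float) -> float:
    """Monetize a cycle-time improvement across one quarter's case volume."""
    hours_saved_per_case = baseline_hours - target_hours
    return hours_saved_per_case * cases_per_quarter * cost_per_hour

# Cutting processing time from 48 to 12 hours, 100 cases/quarter, EUR 60/hour:
print(quarterly_savings(48, 12, 100, 60))  # 216000.0
```

If filling in those four arguments with defensible numbers is impossible, that is usually the clearest sign the project is still a hypothesis, not a business case.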
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and its CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
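Establishing that baseline need not be elaborate. A minimal sketch, assuming process logs with start and end timestamps exist (the log format and figures below are purely illustrative):

```python
# Sketch of deriving a cycle-time baseline from hypothetical process logs.
from datetime import datetime
from statistics import median

# Assumed log format: (case_id, started, finished) -- purely illustrative.
cases = [
    ("A-1", datetime(2025, 3, 1, 9, 0), datetime(2025, 3, 3, 9, 0)),
    ("A-2", datetime(2025, 3, 2, 8, 0), datetime(2025, 3, 4, 20, 0)),
    ("A-3", datetime(2025, 3, 3, 10, 0), datetime(2025, 3, 5, 10, 0)),
]

# Hours per case; the median is robust against a few outlier cases.
cycle_hours = [(end - start).total_seconds() / 3600 for _, start, end in cases]
baseline = median(cycle_hours)
print(f"Baseline cycle time: {baseline:.0f} h")  # Baseline cycle time: 48 h
```

The hard part in practice is rarely the computation; it is getting timestamps that are complete and trustworthy in the first place.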
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren't another innovation-workshop framework. They're filters. Answer them honestly, and you'll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
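The filter can be made mechanical. The following sketch encodes the three questions as required fields on a project proposal; the field names and example proposals are assumptions for illustration, not a prescribed schema:

```python
# Minimal sketch of the three-question filter; field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIProjectProposal:
    problem: str                  # Q1: concrete, measurable problem
    baseline: Optional[float]     # Q2: current value of the metric...
    target: Optional[float]       #     ...and the value to reach
    fallback_plan: Optional[str]  # Q3: what happens when the AI is wrong

def passes_filter(p: AIProjectProposal) -> bool:
    q1 = bool(p.problem) and "use ai" not in p.problem.lower()
    q2 = p.baseline is not None and p.target is not None
    q3 = bool(p.fallback_plan)
    return q1 and q2 and q3

good = AIProjectProposal("Cut claims cycle time", 48.0, 12.0, "Route to human")
bad = AIProjectProposal("We want to use AI", None, None, None)
print(passes_filter(good), passes_filter(bad))  # True False
```

The point is not the code but the forcing function: a proposal that cannot populate these fields should not reach a kickoff meeting.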
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
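The five questions and the four-of-five threshold above can be sketched as a trivial scoring function; the answer vector is of course made up:

```python
# Scoring sketch for the five-question readiness check; answers are hypothetical.
CHECKLIST = [
    "Concrete, measurable target value for every active AI project?",
    "Designated process owner accountable for success?",
    "Documented data baseline for the affected processes?",
    "Fallback strategy for failures or erroneous outputs?",
    "Measured against business metrics, not project counts?",
]

def problem_first_ready(answers: list[bool], threshold: int = 4) -> bool:
    """At least `threshold` honest 'yes' answers across the five questions."""
    assert len(answers) == len(CHECKLIST)
    return sum(answers) >= threshold

print(problem_first_ready([True, True, True, True, False]))   # True
print(problem_first_ready([True, False, True, False, False])) # False
```

Honesty is the load-bearing assumption here: the check is only worth running if the "yes" answers would survive an audit.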
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
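As a minimal sketch of the "include implementation costs" point, a payback calculation makes it concrete; all figures below are illustrative assumptions, not data from any real project:

```python
# Hypothetical payback sketch: every number here is a made-up assumption.

def payback_months(monthly_benefit: float, implementation_cost: float,
                   monthly_run_cost: float) -> float:
    """Months until cumulative net benefit covers the implementation cost."""
    net_monthly = monthly_benefit - monthly_run_cost
    if net_monthly <= 0:
        return float("inf")  # the project never pays for itself
    return implementation_cost / net_monthly

# Example: EUR 25,000/month benefit, EUR 150,000 to build, EUR 5,000/month to run.
months = payback_months(25_000, 150_000, 5_000)
print(f"Payback after {months:.1f} months")  # Payback after 7.5 months
```

A business case that omits the second and third arguments will always look better than it is.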
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
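The arithmetic behind such a business case fits in a few lines. The sketch below reuses the illustrative targets from the paragraph above; the implementation and running costs are assumptions added purely to show the shape of the calculation, not real figures:

```python
# Hypothetical business-case arithmetic, using the illustrative
# targets named in the text (48 -> 12 hours, 8% -> 2%, EUR 200k/quarter).

baseline_hours = 48        # current processing time per case
target_hours = 12          # quantified target, not a vague "efficiency gain"
baseline_error = 0.08      # 8% error rate today
target_error = 0.02        # 2% target

savings_per_quarter = 200_000    # EUR, from the example in the text

# Cost side - the part "AI-first" plans tend to omit (assumed figures):
implementation_cost = 350_000    # EUR, one-off
run_cost_per_quarter = 30_000    # EUR, hosting + maintenance

net_per_quarter = savings_per_quarter - run_cost_per_quarter
payback_quarters = implementation_cost / net_per_quarter

print(f"Processing time: -{1 - target_hours / baseline_hours:.0%}")
print(f"Error rate: {baseline_error:.0%} -> {target_error:.0%}")
print(f"Payback period: {payback_quarters:.1f} quarters")
```

If you cannot fill in every variable above with a defensible number, the project is still a hypothesis, not a business case.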
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
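Establishing that baseline need not be elaborate. A minimal sketch, assuming you can export per-case open/close timestamps from the existing process (the field names and sample data are hypothetical):

```python
from datetime import datetime
from statistics import median

# Hypothetical export of per-case timestamps from the current process.
cases = [
    {"opened": "2025-03-01T09:00", "closed": "2025-03-03T09:00"},
    {"opened": "2025-03-02T10:00", "closed": "2025-03-04T22:00"},
    {"opened": "2025-03-03T08:00", "closed": "2025-03-05T08:00"},
]

def cycle_time_hours(case):
    opened = datetime.fromisoformat(case["opened"])
    closed = datetime.fromisoformat(case["closed"])
    return (closed - opened).total_seconds() / 3600

hours = [cycle_time_hours(c) for c in cases]

# This number is what every later "AI made it faster" claim gets
# measured against. The median blunts the effect of outlier cases.
print(f"Baseline cycle time: median {median(hours):.1f} h over {len(hours)} cases")
```

A week of real log data and a script like this beats a year of estimated improvements on slides.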
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
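One common answer to question three is a confidence threshold with a human fallback: the system auto-resolves only the cases it is sufficiently sure about, and routes everything else to a person. A minimal sketch; the threshold value and the `classify` callable are stand-ins, not any specific product's API:

```python
AUTO_RESOLVE_THRESHOLD = 0.90  # tune against the cost of a wrong decision

def route_claim(claim, classify):
    """Auto-resolve high-confidence cases; everything else goes to a human.

    `classify` is a placeholder for whatever model is in use:
    it takes a claim and returns (label, confidence).
    """
    label, confidence = classify(claim)
    if confidence >= AUTO_RESOLVE_THRESHOLD:
        return ("auto", label)
    return ("human", None)  # fallback path: a person decides

# Toy stand-in model, purely for illustration.
def toy_classify(claim):
    return ("approve", 0.95) if claim["amount"] < 1000 else ("approve", 0.60)

print(route_claim({"amount": 400}, toy_classify))   # -> ('auto', 'approve')
print(route_claim({"amount": 5000}, toy_classify))  # -> ('human', None)
```

The design choice is the threshold: in a low-stakes domain it can sit lower; in medical diagnostics, effectively everything routes to the human path.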
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
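The five checklist answers reduce to a simple filter, applying the same "at least four yes" rule stated above (the sample answers here are illustrative):

```python
# Illustrative answers to the five readiness questions above.
checklist = {
    "measurable target per active project": True,
    "accountable process owner per project": True,
    "documented data baseline": False,
    "fallback strategy for failures": True,
    "measured on business metrics": True,
}

yes_count = sum(checklist.values())
verdict = "problem-first" if yes_count >= 4 else "sharpen the roadmap"
print(f"{yes_count}/5 -> {verdict}")
```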
Fewer projects, more impact. Less vision, more engineering. Less "AI-first," more "Problem-first, AI-enabled." This isn't a brake on innovation. It's the prerequisite for innovation that actually lands.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is a solution without a problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
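A business case of this kind is, at its core, simple arithmetic. The sketch below shows one way to express it; every figure is an illustrative assumption, not data from a real project.

```python
# Illustrative business-case sketch. Every number here is a placeholder
# assumption, not data from a real project.

def simple_roi(annual_benefit: float, implementation_cost: float,
               annual_running_cost: float) -> float:
    """First-year return on investment as a ratio."""
    return (annual_benefit - annual_running_cost - implementation_cost) / implementation_cost

# Example: assume the freed-up capacity is worth EUR 200,000 per quarter.
annual_benefit = 200_000 * 4      # EUR 800,000 per year
implementation_cost = 350_000     # model, integration, change management (assumed)
annual_running_cost = 120_000     # hosting, monitoring, maintenance (assumed)

roi = simple_roi(annual_benefit, implementation_cost, annual_running_cost)
print(f"First-year ROI: {roi:.0%}")
```

If the inputs to such a calculation are guesses rather than measured values, that itself is the signal: the baseline work hasn't been done yet.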
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
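Establishing a baseline is usually less about modeling and more about plumbing: pulling timestamps and computing a robust summary statistic. A minimal sketch, with invented sample data:

```python
# Baseline sketch: median cycle time from (opened, closed) timestamps.
# The ticket data below is invented for illustration.
from datetime import datetime
from statistics import median

tickets = [
    ("2025-03-01 09:00", "2025-03-03 09:00"),  # 48 h
    ("2025-03-02 10:00", "2025-03-04 22:00"),  # 60 h
    ("2025-03-05 08:00", "2025-03-06 20:00"),  # 36 h
]

def hours(opened: str, closed: str) -> float:
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(closed, fmt) - datetime.strptime(opened, fmt)
    return delta.total_seconds() / 3600

baseline = median(hours(o, c) for o, c in tickets)
print(f"Baseline median cycle time: {baseline:.0f} h")  # 48 h
```

The median is a deliberate choice here: a handful of stuck outlier tickets shouldn't distort the number you later measure the AI against.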
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
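In practice, the fallback requirement often reduces to one pattern: wrap the model call, check its confidence, and route low-confidence or failing cases to a human queue. A minimal sketch, in which `classify` and the 0.9 threshold are hypothetical stand-ins:

```python
# Minimal fallback pattern: low-confidence or failing AI outputs are routed
# to a human review queue instead of being acted on automatically.
# `classify` and CONFIDENCE_THRESHOLD are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.9

def classify(claim: str) -> tuple[str, float]:
    """Stand-in for a real model call; returns (label, confidence)."""
    return ("routine", 0.95) if "scratch" in claim else ("unclear", 0.40)

def handle_claim(claim: str, human_queue: list) -> str:
    try:
        label, confidence = classify(claim)
    except Exception:
        human_queue.append(claim)          # model failure: human fallback
        return "escalated"
    if confidence < CONFIDENCE_THRESHOLD:  # low confidence: human fallback
        human_queue.append(claim)
        return "escalated"
    return label

queue: list = []
print(handle_claim("minor scratch on bumper", queue))  # routine
print(handle_claim("complex liability case", queue))   # escalated
```

The threshold itself becomes a business decision: in insurance triage it can be lenient, in diagnostics it cannot.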
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
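The five-point check above is mechanical enough to run as a script; the answers in this sketch are placeholders to be replaced with an honest self-assessment.

```python
# Two-minute readiness check for the five questions above.
# The answers are placeholders; fill in your own honestly.

questions = [
    "Concrete, measurable target value for every active AI project?",
    "Designated process owner accountable for success?",
    "Documented data baseline for the affected processes?",
    "Fallback strategy if the AI system fails or errs?",
    "Initiatives measured against business metrics?",
]

answers = [True, True, True, False, True]  # placeholder self-assessment

score = sum(answers)
verdict = "problem-first" if score >= 4 else "sharpen the roadmap"
print(f"{score}/5: {verdict}")
```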
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts, 80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and then looks for problems they might fit. That’s the exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand: Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
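Such a business case reduces to a few lines of arithmetic. As a minimal sketch – every figure below is a hypothetical placeholder in the spirit of the article’s example, not a benchmark:

```python
from dataclasses import dataclass

@dataclass
class BusinessCase:
    """Quantified business case for a single AI use case (illustrative)."""
    baseline_hours: float        # current processing time per case
    target_hours: float          # target processing time per case
    cases_per_quarter: int
    cost_per_hour: float         # fully loaded labor cost in EUR
    implementation_cost: float   # one-off build + integration cost in EUR

    def quarterly_savings(self) -> float:
        hours_saved = (self.baseline_hours - self.target_hours) * self.cases_per_quarter
        return hours_saved * self.cost_per_hour

    def payback_quarters(self) -> float:
        return self.implementation_cost / self.quarterly_savings()

# Hypothetical numbers: 48h -> 12h processing time, 200 cases per quarter.
case = BusinessCase(baseline_hours=48, target_hours=12,
                    cases_per_quarter=200, cost_per_hour=60,
                    implementation_cost=250_000)
print(f"Savings per quarter: EUR {case.quarterly_savings():,.0f}")  # EUR 432,000
print(f"Payback period: {case.payback_quarters():.1f} quarters")    # 0.6 quarters
```

If filling in these fields is impossible for a proposed project, that is the hypothesis-not-a-project signal the paragraph above describes.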
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
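The arithmetic behind that comparison is worth making explicit. A back-of-the-envelope sketch – all volumes and error rates are hypothetical assumptions, not Klarna figures:

```python
# Hypothetical volume: a specialist resolves 20 cases per day unaided.
baseline_cases = 20

# Augmentation: the same specialist, AI-assisted, handles twice the volume
# at a near-unchanged error rate (assumed 2%).
augmented_correct = 2 * baseline_cases * 0.98

# Replacement: a bot clears the same doubled queue but resolves half incorrectly.
bot_correct = 2 * baseline_cases * 0.50

print(f"Augmented specialist: {augmented_correct:.0f} correct cases/day")  # 39
print(f"Unassisted bot:       {bot_correct:.0f} correct cases/day")        # 20
```

And the bot’s incorrect resolutions are not neutral – each one typically generates rework and customer frustration on top of the lost case.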
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped. A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
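One common shape for such a fallback plan is a confidence threshold: predictions below it are routed to a human instead of acted on. A minimal sketch – the threshold and the `classify` stub are illustrative assumptions, not a production design:

```python
CONFIDENCE_THRESHOLD = 0.90  # below this, a human reviews the case

def classify(claim: str) -> tuple[str, float]:
    """Stand-in for a real model call; returns (label, confidence)."""
    # Hypothetical stub – a deployed system would call the actual model here.
    return ("approve", 0.62) if "unclear" in claim else ("approve", 0.97)

def handle_claim(claim: str) -> str:
    label, confidence = classify(claim)
    if confidence < CONFIDENCE_THRESHOLD:
        return "escalate_to_human"  # the fallback path the question demands
    return label

print(handle_claim("standard claim"))          # approve (auto-handled)
print(handle_claim("unclear damage report"))   # escalate_to_human
```

The point is not the specific threshold but that the error path is designed before go-live, not discovered after it.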
These three questions aren’t another workshop innovation framework. They’re filters. Answer them honestly, and you’ll likely find that only three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
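Applied mechanically, the filter amounts to three boolean checks per project proposal. A sketch – the field names are chosen for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProjectProposal:
    problem_statement: Optional[str]   # Q1: concrete, measurable problem?
    baseline_value: Optional[float]    # Q2: measured today...
    target_value: Optional[float]      #     ...with a defined target
    fallback_plan: Optional[str]       # Q3: what happens when the AI errs?

def passes_problem_first_filter(p: ProjectProposal) -> bool:
    return all([
        bool(p.problem_statement),
        p.baseline_value is not None and p.target_value is not None,
        bool(p.fallback_plan),
    ])

proposals = [
    ProjectProposal("Cut claims cycle time", 48.0, 12.0, "Human review queue"),
    ProjectProposal("Do something with AI", None, None, None),  # a means, not a problem
]
approved = [p for p in proposals if passes_problem_first_filter(p)]
print(f"{len(approved)} of {len(proposals)} proposals pass the filter")  # 1 of 2
```

The code is trivial by design: the hard part is supplying honest values, not evaluating them.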
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
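The reality check above reduces to counting “yes” answers against a bar of four. A sketch – the answers here are hypothetical:

```python
# The five checklist questions, with one organization's (hypothetical) answers.
checklist = {
    "measurable target value per active project": True,
    "accountable process owner per project":      True,
    "documented data baseline":                   False,
    "fallback strategy for failures":             True,
    "measured against business metrics":          True,
}

yes_count = sum(checklist.values())  # True counts as 1
verdict = "problem-first" if yes_count >= 4 else "sharpen the roadmap"
print(f"{yes_count}/5 yes -> {verdict}")  # 4/5 yes -> problem-first
```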
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts, 80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and then looks for problems they might fit. That’s the exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand: Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
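The arithmetic behind such a business case is deliberately simple. The sketch below uses the hypothetical targets from the paragraph above (48 → 12 hours, €200,000 saved per quarter); the implementation cost is an invented figure for illustration, not real project data.

```python
# Back-of-the-envelope business case for an AI project.
# All figures are assumptions for demonstration purposes.

def payback_months(implementation_cost: float, benefit_per_quarter: float) -> float:
    """Months until cumulative benefit covers the one-off implementation cost."""
    return implementation_cost / (benefit_per_quarter / 3)

baseline_hours = 48.0            # current processing time (baseline)
target_hours = 12.0              # quantified target
improvement = 1 - target_hours / baseline_hours  # 0.75, i.e. 75% faster

cost = 300_000.0                 # assumed one-off implementation cost (EUR)
benefit = 200_000.0              # savings per quarter, from the example above

print(f"Cycle-time improvement: {improvement:.0%}")
print(f"Payback period: {payback_months(cost, benefit):.1f} months")
```

If you cannot fill in these three numbers – baseline, target, and cost – you are still at the hypothesis stage described above.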
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped. A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
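The filter can even be made mechanical. The sketch below encodes the three questions as a gating check; the field names and the example project are invented for illustration, not a prescribed template.

```python
# A minimal sketch of the three-question problem-first filter.
# Field names and the sample proposal are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIProjectProposal:
    problem_statement: str              # Q1: concrete, measurable problem
    baseline_value: Optional[float]     # Q2: current metric (e.g. hours per case)
    target_value: Optional[float]       # Q2: quantified goal
    fallback_plan: Optional[str]        # Q3: what happens when the model is wrong

def passes_problem_first_filter(p: AIProjectProposal) -> bool:
    # "We want to use AI" names a means, not a problem (question one).
    has_problem = bool(p.problem_statement) and "use AI" not in p.problem_statement
    has_baseline = p.baseline_value is not None and p.target_value is not None
    has_fallback = bool(p.fallback_plan)
    return has_problem and has_baseline and has_fallback

proposal = AIProjectProposal(
    problem_statement="Cut claims processing time",
    baseline_value=48.0,
    target_value=12.0,
    fallback_plan="Route low-confidence cases to a human specialist",
)
print(passes_problem_first_filter(proposal))  # True
```

A proposal whose “problem” reads “We want to use AI,” or that lacks a baseline or fallback plan, fails the gate – which is exactly the point of the filter.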
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
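For teams that like to make the reality check explicit, the five questions reduce to a simple score. The answers below are hypothetical placeholders.

```python
# The five checklist questions as a readiness score.
# Answers are hypothetical; four or more "yes" counts as problem-first.
answers = {
    "measurable target per active project": True,
    "accountable process owner per project": True,
    "documented data baseline": False,
    "fallback strategy for failures": True,
    "measured against business metrics": True,
}
score = sum(answers.values())
verdict = "problem-first" if score >= 4 else "sharpen the roadmap"
print(f"{score}/5 -> {verdict}")
```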
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
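Targets like these can be turned into a back-of-the-envelope payback calculation before any tooling decision. A minimal sketch: only the 48-to-12-hour target comes from the text; case volume, hourly rate, and all cost figures are illustrative assumptions.

```python
# Illustrative business-case sketch. Only the 48h -> 12h target is from
# the text; every other figure is an assumed example input.

def annual_benefit(hours_saved_per_case: float,
                   cases_per_year: int,
                   cost_per_hour: float) -> float:
    """Monetize a cycle-time reduction."""
    return hours_saved_per_case * cases_per_year * cost_per_hour

benefit = annual_benefit(hours_saved_per_case=48 - 12,
                         cases_per_year=2_000,    # assumed volume
                         cost_per_hour=60.0)      # assumed loaded rate

implementation_cost = 900_000   # assumed one-off build + integration
annual_run_cost = 150_000       # assumed licences, ops, monitoring

payback_years = implementation_cost / (benefit - annual_run_cost)
print(f"annual benefit: {benefit:,.0f}")
print(f"payback: {payback_years:.1f} years")
```

If you cannot fill in even rough versions of these inputs, that is itself the signal: the project is still a hypothesis.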
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
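Establishing that baseline can start as simply as measuring cycle times from existing process logs. A hypothetical sketch, where the log format and timestamps are invented for illustration:

```python
# Hypothetical sketch: deriving a cycle-time baseline from process
# event timestamps before any model work begins.
from datetime import datetime
from statistics import median

# Assumed log format: (case_id, opened_at, closed_at)
events = [
    ("C-001", "2025-03-01T09:00", "2025-03-03T09:00"),
    ("C-002", "2025-03-01T10:00", "2025-03-02T10:00"),
    ("C-003", "2025-03-02T08:00", "2025-03-05T08:00"),
]

def cycle_time_hours(opened: str, closed: str) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(closed, fmt) - datetime.strptime(opened, fmt)
    return delta.total_seconds() / 3600

durations = [cycle_time_hours(o, c) for _, o, c in events]
baseline = median(durations)   # median is robust against outlier cases
print(f"baseline cycle time: {baseline:.1f} h")
```

The median is used here rather than the mean so a handful of pathological cases does not distort the baseline the AI project will later be judged against.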
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
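A fallback plan often takes the form of a confidence threshold: act automatically only when the model is sure, and hand everything else to a person. A minimal sketch, where the 0.85 threshold and the routing labels are assumptions:

```python
# Sketch of a confidence-threshold fallback: act on the model's output
# only above a set confidence, otherwise route to a human queue.
# The 0.85 threshold and routing labels are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.85

def route_case(prediction: str, confidence: float) -> str:
    """Return the channel a case is routed to."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto:{prediction}"   # straight-through processing
    return "human_review"             # fallback: a specialist decides

print(route_case("approve_claim", 0.97))
print(route_case("approve_claim", 0.60))
```

Where the threshold sits is a business decision, not a technical one: in insurance triage it might be 0.85, in anything safety-critical it may effectively be 1.0, meaning the AI only ever assists.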
These three questions aren’t another workshop innovation framework. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
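The filter can even be encoded as a literal go/no-go gate in project intake. A hedged sketch; the field names and the rejection heuristic are illustrative, not a standard schema:

```python
# Sketch of the three-question filter as a go/no-go intake gate.
# Field names and the rejection heuristic are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProjectProposal:
    problem_statement: str            # Q1: concrete, measurable problem
    baseline_value: Optional[float]   # Q2: current measured value
    target_value: Optional[float]     # Q2: quantified target
    fallback_plan: Optional[str]      # Q3: what happens when the AI errs

def passes_problem_first_filter(p: ProjectProposal) -> bool:
    # Q1: a real problem, not "we want to use AI"
    has_problem = bool(p.problem_statement.strip()) \
        and "use AI" not in p.problem_statement
    # Q2: both baseline and target must be quantified
    has_numbers = p.baseline_value is not None and p.target_value is not None
    # Q3: an explicit fallback for erroneous outputs
    has_fallback = bool(p.fallback_plan)
    return has_problem and has_numbers and has_fallback

proposal = ProjectProposal(
    problem_statement="Cut claims processing time",
    baseline_value=48.0,
    target_value=12.0,
    fallback_plan="Route low-confidence cases to a specialist",
)
print(passes_problem_first_filter(proposal))  # True
```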
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
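The check above reduces to counting honest "yes" answers against the four-of-five bar. A trivial sketch with assumed example answers:

```python
# Tiny sketch of the two-minute check: count honest "yes" answers and
# compare against the four-of-five bar. The answers are example inputs.
answers = {
    "measurable target per project": True,
    "accountable process owner": True,
    "documented data baseline": False,
    "fallback strategy": True,
    "measured on business metrics": True,
}

yes_count = sum(answers.values())
problem_first_ready = yes_count >= 4
print(yes_count, problem_first_ready)
```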
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
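Establishing a baseline rarely needs more than the process logs you already have. A minimal sketch, assuming the logs carry opening and closing timestamps (field names are hypothetical):

```python
# Hypothetical sketch: establishing a cycle-time baseline from process logs
# before any model work starts. Field names are assumptions.
from datetime import datetime
from statistics import median

def baseline_cycle_time_hours(events: list[dict]) -> dict:
    """Median and worst-case cycle time, in hours, from start/end timestamps."""
    durations = [
        (e["closed_at"] - e["opened_at"]).total_seconds() / 3600
        for e in events
    ]
    return {"median_h": median(durations), "max_h": max(durations)}

log = [
    {"opened_at": datetime(2025, 3, 1, 9), "closed_at": datetime(2025, 3, 3, 9)},    # 48 h
    {"opened_at": datetime(2025, 3, 2, 8), "closed_at": datetime(2025, 3, 2, 20)},   # 12 h
    {"opened_at": datetime(2025, 3, 4, 10), "closed_at": datetime(2025, 3, 5, 22)},  # 36 h
]
baseline = baseline_cycle_time_hours(log)
# {'median_h': 36.0, 'max_h': 48.0}
```

The median is deliberately preferred over the mean here: a handful of stuck cases would otherwise distort the baseline you later measure the AI against.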
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there's a fallback plan. In insurance claims processing, a misclassified claim is an annoyance. In medical diagnostics, it's a catastrophe.
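The simplest fallback plan is a confidence gate: the system acts on its own prediction only above a threshold and routes everything else to a human queue. A minimal sketch, with the threshold and labels as assumptions:

```python
# Hypothetical sketch of a fallback plan: route low-confidence AI decisions
# to a human queue instead of acting on them automatically.

def route_claim(prediction: str, confidence: float, threshold: float = 0.9) -> str:
    """Accept the model's decision only above the confidence threshold."""
    if confidence >= threshold:
        return f"auto:{prediction}"
    return "human_review"

route_claim("approve", 0.97)  # -> "auto:approve"
route_claim("reject", 0.62)   # -> "human_review"
```

Where to set the threshold is itself a business decision: it trades automation rate against the cost of an uncaught error, which is exactly why the insurance case and the diagnostics case demand different answers.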
These three questions aren’t another innovation-workshop framework. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
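The checklist above can be run as a literal gate. A hypothetical sketch, with the five questions abbreviated to labels of my choosing:

```python
# Hypothetical sketch: the five checklist questions as a readiness gate.
# Labels are shorthand assumptions, not the article's exact wording.
CHECKLIST = [
    "measurable target value per project",
    "accountable process owner",
    "documented data baseline",
    "fallback strategy for failures",
    "measured against business metrics",
]

def problem_first_ready(answers: dict[str, bool], required: int = 4) -> bool:
    """'Yes' to at least four of the five means you're operating problem-first."""
    return sum(answers.get(q, False) for q in CHECKLIST) >= required

answers = {q: True for q in CHECKLIST}
answers["documented data baseline"] = False  # one honest "no" still passes
problem_first_ready(answers)  # -> True
```

An unanswered question counts as "no" by design: if nobody can say whether a project has a data baseline, it doesn't.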
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact inversion of proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
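The arithmetic behind such a target fits on a napkin. A minimal sketch – every figure here is a hypothetical assumption except the 48-to-12-hour target echoed from the example above:

```python
# Back-of-envelope business case check. All numbers are illustrative
# assumptions, not figures from any real project.
def business_case(hours_before: float, hours_after: float,
                  cases_per_quarter: int, cost_per_hour: float,
                  implementation_cost: float) -> tuple[float, float]:
    """Return (quarterly savings in EUR, quarters until break-even)."""
    hours_saved = (hours_before - hours_after) * cases_per_quarter
    quarterly_savings = hours_saved * cost_per_hour
    return quarterly_savings, implementation_cost / quarterly_savings

savings, payback = business_case(
    hours_before=48, hours_after=12,      # the 48h -> 12h target
    cases_per_quarter=500,                # hypothetical case volume
    cost_per_hour=12.0,                   # hypothetical loaded labor cost
    implementation_cost=150_000,          # hypothetical build + rollout cost
)
print(f"{savings:,.0f} EUR/quarter, break-even after {payback:.1f} quarters")
```

If you can’t fill in those parameters with defensible numbers, the point above stands: you have a hypothesis, not a project.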
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. In insurance damage assessment, a misclassified claim may be an annoyance. In medical diagnostics, it’s a catastrophe.
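In practice, a fallback plan is often just a routing rule: act automatically only above a confidence threshold, otherwise hand the case to a human. A sketch – the threshold and labels are illustrative assumptions:

```python
# Fallback policy: low-confidence model outputs go to human review
# instead of being acted on. The 0.9 threshold is an assumption and
# would be tuned per use case and error cost.
def route(prediction: str, confidence: float,
          threshold: float = 0.9) -> tuple[str, str]:
    """Return (channel, prediction), where channel is 'auto' or 'human_review'."""
    channel = "auto" if confidence >= threshold else "human_review"
    return channel, prediction

print(route("approve_claim", 0.97))   # high confidence: handled automatically
print(route("approve_claim", 0.62))   # low confidence: escalated to a person
```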
These three questions aren’t another workshop innovation framework. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
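The five-question check reduces to a simple count. A throwaway sketch – the question keys are made up for illustration:

```python
# The two-minute reality check as a scoring helper. A "yes" on at
# least four of the five questions counts as operating problem-first.
CHECK = ["measurable_target", "process_owner", "data_baseline",
         "fallback_strategy", "business_metrics"]

def problem_first(answers: dict[str, bool]) -> bool:
    """answers maps each question key to True ('yes') or False ('no')."""
    score = sum(answers.get(q, False) for q in CHECK)
    return score >= 4

print(problem_first({q: True for q in CHECK}))     # all five: problem-first
print(problem_first({"measurable_target": True}))  # one of five: not yet
```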
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
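Question two can be made operational in a few lines: record the metric before the pilot, then compare the pilot against it. A minimal sketch; the duration samples are invented example data:

```python
# Sketch: without a recorded baseline, "faster" is unmeasurable.
# The duration samples below are invented example data.
from statistics import median

baseline_hours = [52, 47, 44, 50, 49]   # measured before any AI involvement
pilot_hours    = [14, 11, 13, 12, 15]   # same process with AI support

baseline = median(baseline_hours)
pilot = median(pilot_hours)
improvement = (baseline - pilot) / baseline

print(f"Baseline {baseline} h -> pilot {pilot} h ({improvement:.0%} faster)")
# Baseline 49 h -> pilot 13 h (73% faster)
```

The point is not the arithmetic but the ordering: the first list must exist before the second one is ever collected.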
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there's a fallback plan. In insurance damage assessment, a misclassified claim may be an annoyance. In medical diagnostics, it's a catastrophe.
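One common fallback pattern for question three is a confidence threshold: below it, the case goes to a person instead of being auto-resolved. A minimal sketch; the threshold value and the stub classifier are placeholder assumptions, not a reference to any specific model or product:

```python
# Sketch of a human-in-the-loop fallback: the AI only auto-resolves
# when its confidence clears a threshold; everything else is escalated.
CONFIDENCE_THRESHOLD = 0.9  # assumed value for illustration; tune per use case

def handle_claim(claim, classify):
    """classify(claim) -> (label, confidence); escalate uncertain cases."""
    label, confidence = classify(claim)
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"route": "auto", "label": label}
    return {"route": "human_review", "label": None}

# Stub standing in for a real classifier
result = handle_claim("water damage, basement", lambda c: ("cover", 0.62))
print(result)  # low confidence -> routed to human review
```

Where to set the threshold is itself a business decision: in claims handling a cautious value costs review time; in diagnostics anything less than cautious is not an option.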
These three questions aren’t another workshop innovation framework. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
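The five answers above can even be scored mechanically. A minimal sketch applying the "at least four of five" rule; the answers filled in are invented examples:

```python
# Sketch: score the five readiness questions from the checklist.
# The True/False answers are invented example values.
checklist = {
    "measurable target per project": True,
    "accountable process owner": True,
    "documented data baseline": False,
    "fallback strategy": True,
    "measured against business metrics": True,
}

score = sum(checklist.values())   # True counts as 1
ready = score >= 4                # threshold from the checklist above
print(f"{score}/5 -> {'problem-first' if ready else 'sharpen the roadmap'}")
# 4/5 -> problem-first
```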
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
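Establishing a baseline need not be elaborate. A minimal sketch, assuming you can pull opened/closed timestamps from a ticket system (the records and field names below are illustrative):

```python
# Minimal baseline sketch: measure the current cycle time of a process
# from ticket timestamps before any AI work begins. Records and field
# names are illustrative assumptions, not a prescribed schema.
from datetime import datetime
from statistics import median

tickets = [
    {"opened": datetime(2025, 3, 1, 9), "closed": datetime(2025, 3, 3, 9)},
    {"opened": datetime(2025, 3, 2, 8), "closed": datetime(2025, 3, 4, 20)},
    {"opened": datetime(2025, 3, 5, 10), "closed": datetime(2025, 3, 6, 10)},
]

cycle_hours = [(t["closed"] - t["opened"]).total_seconds() / 3600 for t in tickets]
baseline = median(cycle_hours)  # median resists outliers better than the mean
print(f"Baseline cycle time: {baseline:.0f} hours")
```

A week of such measurements gives you the number every later claim of improvement must be judged against.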
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn't whether errors occur – but whether there's a fallback plan. In insurance damage assessment, a misclassified claim may be an annoyance. In medical diagnostics, it's a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
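The filter itself can be written down as a trivial gate. The data class and field names below are illustrative assumptions, not a prescribed tool; the point is that each of the three questions maps to a checkable field.

```python
# Sketch of the three-question problem-first filter as a simple gate.
# The class and its fields are illustrative, not a standard artifact.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProjectProposal:
    problem_statement: str            # the concrete problem, not "use AI"
    baseline: Optional[float]         # measured current value (e.g. hours)
    target: Optional[float]           # quantified goal
    fallback_plan: Optional[str]      # what happens when the model is wrong

def passes_problem_first_filter(p: ProjectProposal) -> bool:
    has_problem = bool(p.problem_statement) and "use AI" not in p.problem_statement
    has_metrics = p.baseline is not None and p.target is not None
    has_fallback = bool(p.fallback_plan)
    return has_problem and has_metrics and has_fallback
```

A proposal like `ProjectProposal("Cut claims cycle time", 48.0, 12.0, "route to human reviewer")` passes; "We want to use AI" with no numbers does not.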
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
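Scoring the reality check is deliberately mechanical; the answers below are illustrative, not a verdict on any real organization.

```python
# Trivial scoring sketch for the five-question reality check.
# The example answers are illustrative assumptions.
answers = {
    "measurable target per active project": True,
    "accountable process owner per project": True,
    "documented data baseline": False,
    "fallback strategy for failures": True,
    "measured against business metrics": True,
}
yes_count = sum(answers.values())
verdict = "problem-first" if yes_count >= 4 else "sharpen the roadmap, don't expand it"
print(f"{yes_count}/5: {verdict}")
```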
Fewer projects, more impact. Less vision, more engineering. Less "AI-first," more "Problem-first, AI-enabled." This isn't a brake on innovation. It's the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means a company buys AI tools and then looks for problems they might fit. That inverts every proven innovation principle. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
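Such a business case is ultimately a few lines of arithmetic. A minimal sketch, reusing the illustrative figures above (48 to 12 hours, €200,000 per quarter) and adding assumed implementation and running costs; every number here is a placeholder, not a benchmark:

```python
# Minimal business-case sketch. All figures are illustrative placeholders.
baseline_hours = 48            # current processing time per case
target_hours = 12              # quantified target from the business case
quarterly_savings = 200_000    # EUR expected benefit per quarter (assumed)
implementation_cost = 350_000  # EUR one-off build cost (assumed)
quarterly_run_cost = 30_000    # EUR hosting, licenses, maintenance (assumed)

net_quarterly_benefit = quarterly_savings - quarterly_run_cost
payback_quarters = implementation_cost / net_quarterly_benefit
time_reduction = 1 - target_hours / baseline_hours

print(f"Processing time reduced by {time_reduction:.0%}")   # 75%
print(f"Payback period: {payback_quarters:.1f} quarters")   # 2.1 quarters
```

If the net benefit comes out at or below zero, the filter has done its job before a single model is trained.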
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
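Establishing a baseline is mostly measurement, not modeling: often nothing more than exporting timestamps from the existing system and computing a robust summary statistic. A minimal sketch with hypothetical ticketing data (the case IDs and timestamps are invented for illustration):

```python
from datetime import datetime
from statistics import median

# Hypothetical export: (case_id, opened, closed) rows from a ticketing system.
cases = [
    ("C-101", "2025-03-01 09:00", "2025-03-03 10:30"),
    ("C-102", "2025-03-02 14:15", "2025-03-04 09:45"),
    ("C-103", "2025-03-03 08:00", "2025-03-05 16:00"),
]

def hours(opened: str, closed: str) -> float:
    """Elapsed hours between two timestamp strings."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(closed, fmt) - datetime.strptime(opened, fmt)
    return delta.total_seconds() / 3600

durations = [hours(o, c) for _, o, c in cases]
print(f"Baseline median cycle time: {median(durations):.1f} h")  # 49.5 h
```

The median is used rather than the mean so a few extreme outlier cases do not distort the baseline.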
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
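A fallback plan frequently takes the form of a confidence threshold with human review behind it. A minimal human-in-the-loop sketch; the threshold value and routing labels are assumptions to be tuned against the measured cost of an error, not recommendations:

```python
REVIEW_THRESHOLD = 0.85  # assumed; set from the real cost of a wrong decision

def route_claim(prediction: str, confidence: float) -> str:
    """Auto-resolve only above the threshold; otherwise
    fall back to a human specialist."""
    if confidence >= REVIEW_THRESHOLD:
        return f"auto:{prediction}"
    return "human_review"

print(route_claim("approve", 0.97))  # auto:approve
print(route_claim("reject", 0.62))   # human_review
```

In a low-stakes domain the threshold can sit low; in medical diagnostics it effectively means every output gets human review.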
These three questions aren’t another workshop innovation framework. They’re filters. Answer them honestly, and you’ll likely find that only three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
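Targets like these are just arithmetic once baseline, target, and volume are on the table. A minimal sketch of that arithmetic, assuming illustrative case volumes and cost rates – only the 48→12 hour and 8%→2% figures come from the text; the function and every other parameter are hypothetical:

```python
# Hedged sketch: turning stated targets into a quantified quarterly benefit.
# Only the 48h -> 12h and 8% -> 2% targets come from the text; case volume,
# hourly cost, and cost per error are illustrative assumptions.

def quarterly_benefit(baseline_hours, target_hours,
                      baseline_error_rate, target_error_rate,
                      cases_per_quarter, cost_per_hour, cost_per_error):
    """Monetize the gap between baseline and target for one process."""
    hours_saved = (baseline_hours - target_hours) * cases_per_quarter
    errors_avoided = (baseline_error_rate - target_error_rate) * cases_per_quarter
    return hours_saved * cost_per_hour + errors_avoided * cost_per_error

benefit = quarterly_benefit(
    baseline_hours=48, target_hours=12,
    baseline_error_rate=0.08, target_error_rate=0.02,
    cases_per_quarter=100, cost_per_hour=50, cost_per_error=500,
)
```

If the computed benefit doesn't comfortably exceed implementation cost, the article's verdict applies: you have a hypothesis, not a project.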
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
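Establishing that baseline is ordinary measurement work, not model work. A minimal sketch, assuming process records with opened/resolved timestamps – the field names and sample tickets are made up for illustration:

```python
# Hedged sketch: computing a cycle-time baseline from process records.
# Field names and the sample tickets are illustrative assumptions.
from datetime import datetime
from statistics import median

def baseline_cycle_time_hours(records):
    """Median hours from opened to resolved - the 'before' number."""
    hours = [
        (datetime.fromisoformat(r["resolved"])
         - datetime.fromisoformat(r["opened"])).total_seconds() / 3600
        for r in records
    ]
    return median(hours)

tickets = [
    {"opened": "2025-01-01T09:00", "resolved": "2025-01-03T09:00"},  # 48 h
    {"opened": "2025-01-02T10:00", "resolved": "2025-01-04T10:00"},  # 48 h
    {"opened": "2025-01-05T08:00", "resolved": "2025-01-05T20:00"},  # 12 h
]
```

Run once before the project starts, and the "AI made it faster" claim becomes checkable instead of rhetorical.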
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. In insurance damage assessment, a misclassified claim may be an annoyance. In medical diagnostics, it’s a catastrophe.
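In practice, a fallback plan often reduces to a confidence threshold with a human escalation path. A minimal sketch of that pattern – the `classify_claim` stub and the 0.85 threshold are assumptions for illustration, not a real model API:

```python
# Hedged sketch: confidence-threshold fallback around an AI decision.
# classify_claim is a stand-in for a real model call; the threshold is illustrative.

def classify_claim(text):
    """Pretend model: returns (label, confidence)."""
    return ("approve", 0.62) if "minor" in text else ("deny", 0.30)

def decide(text, threshold=0.85):
    label, confidence = classify_claim(text)
    if confidence < threshold:
        # The fallback plan: uncertain cases go to a person, not to production.
        return "escalate_to_human"
    return label
```

The threshold is the policy knob: set it by the cost of a wrong decision in the specific domain, not by what makes the automation rate look good.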
These three questions aren’t another innovation-workshop framework. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
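The four-of-five bar is easy to make explicit. A toy sketch – the answer keys paraphrase the checklist above, and the example values are invented:

```python
# Hedged sketch: the five checklist answers scored against the four-"yes" bar.

def readiness(answers):
    """Four or more honest 'yes' answers: operating problem-first."""
    return "problem-first" if sum(answers.values()) >= 4 else "sharpen the roadmap"

example = {
    "measurable_target_per_project": True,
    "accountable_process_owner": True,
    "documented_data_baseline": False,  # often the missing one
    "fallback_strategy": True,
    "measured_on_business_metrics": True,
}
```

The point isn't the code – it's that each answer must be a defensible yes/no, not a "partially."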
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
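Establishing that baseline need not be elaborate. As a minimal sketch, assuming process events are already logged with start and end timestamps (the log entries below are invented examples), a few lines turn raw logs into a measurable baseline:

```python
from datetime import datetime
from statistics import mean, median

# Hypothetical process log: (case_id, started, finished) tuples.
log = [
    ("A-101", datetime(2025, 3, 1, 9, 0), datetime(2025, 3, 3, 9, 0)),
    ("A-102", datetime(2025, 3, 1, 10, 0), datetime(2025, 3, 2, 4, 0)),
    ("A-103", datetime(2025, 3, 2, 8, 0), datetime(2025, 3, 4, 20, 0)),
]

# Cycle time per case, in hours.
cycle_hours = [(end - start).total_seconds() / 3600 for _, start, end in log]

baseline = {
    "mean_hours": mean(cycle_hours),
    "median_hours": median(cycle_hours),
    "worst_hours": max(cycle_hours),
}
print(baseline)
```

A number like `mean_hours` is the value any later "AI made it faster" claim has to beat.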
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
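One common fallback pattern is human-in-the-loop routing with a confidence threshold: the system auto-resolves only what the model is sure about and escalates the rest. A minimal sketch, with all names and the threshold value chosen for illustration:

```python
# Confidence-threshold fallback sketch. The function name, threshold,
# and labels are illustrative assumptions, not a specific product's API.

REVIEW_THRESHOLD = 0.85  # below this confidence, a human decides

def route_claim(prediction: str, confidence: float) -> str:
    """Auto-resolve only high-confidence predictions; escalate the rest."""
    if confidence >= REVIEW_THRESHOLD:
        return f"auto:{prediction}"
    return "human_review"

# Usage: the uncertain case never reaches the customer unreviewed.
print(route_claim("approve", 0.97))
print(route_claim("deny", 0.60))
```

Where to set the threshold is itself a business decision: it trades automation rate against the cost of a wrong auto-resolution.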
These three questions aren’t another innovation-workshop framework. They’re filters. Answer them honestly, and you’ll likely find that only three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
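For teams that like their reality checks executable, the five questions above reduce to a trivial scoring script (the dictionary keys and example answers are made up for illustration):

```python
# Readiness self-check over the five checklist questions.
# The answers below are hypothetical example input.

answers = {
    "measurable_target_per_project": True,
    "accountable_process_owner": True,
    "documented_data_baseline": False,
    "fallback_strategy": True,
    "business_metric_tracking": True,
}

score = sum(answers.values())
verdict = "problem-first" if score >= 4 else "sharpen the roadmap"
print(f"{score}/5 -> {verdict}")
```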
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with the framing. "AI-first" sounds decisive and innovative. In operational reality, however, it usually means a company buys AI tools first and then looks for problems they might fit. That inverts every proven innovation process. Technology doesn't solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
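The three questions above can be forced into the open before any budget is spent. A minimal sketch of such a project definition follows; the class, field names, and example values are hypothetical illustrations, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class ProjectCharter:
    """Forces every AI project to name a process, a metric, and a number."""
    process: str      # the one process this project improves
    metric: str       # how improvement will be measured
    baseline: float   # current value of that metric
    target: float     # value the project commits to

    def is_fundable(self) -> bool:
        # A project without a measurable gap is a hypothesis, not a project.
        return self.baseline != self.target and self.metric != ""

# Hypothetical example: claims-processing cycle time in hours
charter = ProjectCharter(
    process="insurance claims triage",
    metric="median cycle time (hours)",
    baseline=48.0,
    target=12.0,
)
print(charter.is_fundable())  # True
```

If `is_fundable()` is false for a proposed project, the question "Which specific process? By how much? How measured?" has not been answered yet.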
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
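The arithmetic behind such a business case is deliberately simple. The sketch below uses the illustrative figures from the text (48 to 12 hours, 8% to 2%, €200,000 per quarter); the implementation cost is a hypothetical placeholder:

```python
# Illustrative business case: benefit figures from the text, cost assumed.
baseline_hours, target_hours = 48, 12      # processing time
baseline_err, target_err = 0.08, 0.02      # error rate
quarterly_saving = 200_000                 # EUR per quarter (from the text)
implementation_cost = 350_000              # EUR one-off (hypothetical)

time_reduction = 1 - target_hours / baseline_hours   # relative speed-up
error_reduction = 1 - target_err / baseline_err      # relative error drop

# Payback period in quarters, ignoring discounting for simplicity.
payback_quarters = implementation_cost / quarterly_saving
print(f"{time_reduction:.0%} faster, {error_reduction:.0%} fewer errors")
print(f"payback in {payback_quarters:.2f} quarters")
```

If you cannot fill in these variables with defensible numbers, the project fails its own business case before the first API call.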
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
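Establishing a baseline is often less about modeling and more about measuring the current process from its own records. A minimal sketch with hypothetical case-log timestamps:

```python
from datetime import datetime
from statistics import median

# Hypothetical process log: (opened, closed) timestamps per case.
log = [
    (datetime(2025, 3, 1, 9, 0),  datetime(2025, 3, 3, 9, 0)),   # 48 h
    (datetime(2025, 3, 2, 8, 0),  datetime(2025, 3, 4, 20, 0)),  # 60 h
    (datetime(2025, 3, 3, 10, 0), datetime(2025, 3, 4, 22, 0)),  # 36 h
]

# Baseline: median cycle time in hours, measured before any AI is involved.
cycle_hours = [(end - start).total_seconds() / 3600 for start, end in log]
baseline = median(cycle_hours)
print(f"baseline: {baseline:.0f} h")  # baseline: 48 h
```

Only against a number like this can a later "the AI made it faster" claim be checked at all.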
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there's a fallback plan. In insurance claims assessment, a misclassified case may be an annoyance. In medical diagnostics, it's a catastrophe.
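One common fallback pattern is to act only on high-confidence predictions and route everything else to a human. A minimal sketch; the 0.9 threshold and the routing labels are hypothetical:

```python
# Fallback sketch: act on confident predictions, escalate the rest.
CONFIDENCE_THRESHOLD = 0.9  # hypothetical cut-off, tuned per use case

def route(prediction: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto:{prediction}"   # AI resolves the case directly
    return "human_review"             # fallback: a person decides

print(route("approve_claim", 0.97))  # auto:approve_claim
print(route("approve_claim", 0.62))  # human_review
```

The point is not the threshold value but that the escalation path exists and is designed before go-live, not after the first incident.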
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
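For what it's worth, the two-minute reality check reduces to counting "yes" answers. A trivial sketch; the answers shown are hypothetical:

```python
# Hypothetical answers to the five reality-check questions.
checklist = {
    "measurable target per project": True,
    "accountable process owner": True,
    "documented data baseline": False,
    "fallback strategy": True,
    "measured against business metrics": True,
}

yes_count = sum(checklist.values())  # True counts as 1
verdict = "problem-first" if yes_count >= 4 else "sharpen the roadmap"
print(yes_count, verdict)  # 4 problem-first
```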
Fewer projects, more impact. Less vision, more engineering. Less "AI-first," more "Problem-first, AI-enabled." This isn't a brake on innovation. It's the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts, 80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and then looks for problems they might fit. That inverts every proven logic of innovation. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand: Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
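The arithmetic behind such a business case fits in a few lines. The sketch below reuses the illustrative targets from the text; the €500,000 implementation cost is an assumed placeholder, not a benchmark.

```python
# Illustrative business-case arithmetic using the example targets from the text.
# All figures are placeholders – replace them with your own measured values.

def payback_months(annual_benefit: float, implementation_cost: float) -> float:
    """Months until cumulative benefit covers the one-off implementation cost."""
    return implementation_cost / (annual_benefit / 12)

annual_benefit = 200_000 * 4       # €200,000 saved per quarter (target from the text)
implementation_cost = 500_000      # assumed one-off cost, for illustration only

print(f"Payback: {payback_months(annual_benefit, implementation_cost):.1f} months")

cycle_time_gain = 1 - 12 / 48      # processing time cut from 48 h to 12 h
error_rate_gain = 1 - 0.02 / 0.08  # error rate cut from 8% to 2%
print(f"Cycle time cut by {cycle_time_gain:.0%}, errors by {error_rate_gain:.0%}")
```

If a project cannot fill in these three numbers – benefit, cost, payback – it fails the “hard numbers” test before any model is trained.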
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped. A fatal mistake.
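Establishing that baseline rarely requires more than the process logs you already have. The sketch below computes a median cycle time from hypothetical records; the field names, timestamps, and log format are assumptions for illustration.

```python
# Minimal baseline sketch: median cycle time from (hypothetical) process-log records.
from datetime import datetime
from statistics import median

records = [
    {"opened": "2025-03-01T09:00", "closed": "2025-03-03T09:00"},  # 48 h
    {"opened": "2025-03-02T10:00", "closed": "2025-03-02T22:00"},  # 12 h
    {"opened": "2025-03-04T08:00", "closed": "2025-03-06T20:00"},  # 60 h
]

def cycle_hours(rec: dict) -> float:
    """Elapsed hours between opening and closing a case."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(rec["closed"], fmt) - datetime.strptime(rec["opened"], fmt)
    return delta.total_seconds() / 3600

cycle_times = [cycle_hours(r) for r in records]
print(f"Baseline median cycle time: {median(cycle_times):.1f} h")  # 48.0 h
```

A number like this, documented before the project starts, is what later makes “40% faster” a verifiable claim rather than a slogan.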
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t whether errors occur – but whether there’s a fallback plan. In insurance damage assessment, a misclassified claim is an annoyance. In medical diagnostics, it’s a catastrophe.
These three questions aren’t another workshop framework. They’re filters. Answer them honestly, and you’ll likely find that only three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
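One way to give the three questions teeth is to encode them as a hard gate in project intake. The sketch below is illustrative, not a prescribed tool; the field names and the naive “use AI” check are assumptions.

```python
# Sketch of the problem-first gate: a proposal only passes if all three
# questions have substantive answers. Field names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Proposal:
    problem: str               # Q1: concrete, measurable problem
    baseline: Optional[float]  # Q2: current value (e.g. hours per case)
    target: Optional[float]    # Q2: target value
    fallback: str              # Q3: plan for when the AI is wrong

def passes_gate(p: Proposal) -> bool:
    has_problem = bool(p.problem.strip()) and "use AI" not in p.problem
    has_metrics = p.baseline is not None and p.target is not None
    has_fallback = bool(p.fallback.strip())
    return has_problem and has_metrics and has_fallback

concrete = Proposal("Cut claims processing from 48 h to 12 h", 48.0, 12.0,
                    "route low-confidence cases to a human reviewer")
vague = Proposal("We want to use AI", None, None, "")
print(passes_gate(concrete), passes_gate(vague))  # True False
```

The value isn’t in the code – it’s in forcing every proposal to fill in all three fields before budget is released.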
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics, rather than the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
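The two-minute check translates directly into a score. A minimal sketch, with placeholder answers standing in for your own honest assessment:

```python
# Readiness self-check: at least four of five "yes" answers = problem-first.
# The answers below are placeholders, not a real assessment.
answers = {
    "measurable target per active project": True,
    "accountable process owner per project": True,
    "documented data baseline": False,
    "fallback strategy for AI failures": True,
    "measured against business metrics": True,
}
score = sum(answers.values())
verdict = "problem-first" if score >= 4 else "sharpen the roadmap"
print(f"{verdict} ({score}/5)")  # problem-first (4/5)
```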
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
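The arithmetic behind such a business case fits in a few lines. The sketch below uses the processing-time target named above; the per-hour cost, case volume, and implementation cost are illustrative assumptions, not figures from any real project.

```python
# Illustrative business-case arithmetic. All monetary inputs are
# assumptions for demonstration, not benchmarks.

def business_case(baseline, target, unit_value_eur, volume_per_quarter,
                  implementation_cost_eur):
    """Return expected quarterly benefit (EUR) and payback period (quarters)."""
    improvement = baseline - target                       # saved per case
    quarterly_benefit = improvement * unit_value_eur * volume_per_quarter
    payback_quarters = implementation_cost_eur / quarterly_benefit
    return quarterly_benefit, payback_quarters

# Example: cutting processing time from 48 to 12 hours per case, at an
# assumed internal cost of 50 EUR per processing hour, across 500 cases
# per quarter, with an assumed 300,000 EUR implementation cost.
benefit, payback = business_case(
    baseline=48, target=12, unit_value_eur=50,
    volume_per_quarter=500, implementation_cost_eur=300_000,
)
print(f"Quarterly benefit: EUR {benefit:,.0f}")  # EUR 900,000
print(f"Payback: {payback:.1f} quarters")        # 0.3 quarters
```

If you can’t fill in these five inputs with defensible numbers, the business case isn’t ready.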
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped. A fatal mistake.
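Establishing a baseline is often mundane work against existing logs. A minimal sketch, assuming each record carries hypothetical "opened"/"closed" timestamps:

```python
# Sketch: derive a process baseline (median end-to-end time) from event
# logs. Field names "opened"/"closed" are assumptions for illustration.
from datetime import datetime
from statistics import median

def baseline_hours(records):
    """Median end-to-end processing time in hours."""
    durations = [
        (datetime.fromisoformat(r["closed"]) -
         datetime.fromisoformat(r["opened"])).total_seconds() / 3600
        for r in records
    ]
    return median(durations)

tickets = [
    {"opened": "2025-01-06T09:00", "closed": "2025-01-08T09:00"},  # 48 h
    {"opened": "2025-01-07T10:00", "closed": "2025-01-08T22:00"},  # 36 h
    {"opened": "2025-01-09T08:00", "closed": "2025-01-11T20:00"},  # 60 h
]
print(baseline_hours(tickets))  # 48.0
```

The median (rather than the mean) keeps a handful of pathological outliers from distorting the number any AI target is later judged against.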
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
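One common fallback pattern is to auto-process only high-confidence outputs and route everything else to a human queue. The sketch below uses assumed names and an assumed threshold; the right cutoff depends on your own error costs.

```python
# Sketch of a confidence-threshold fallback: low-confidence model outputs
# are escalated to human review instead of being acted on automatically.
CONFIDENCE_THRESHOLD = 0.85  # assumption; tune against real error costs

def route(prediction, confidence):
    """Auto-process confident predictions; escalate the rest."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("auto", prediction)
    return ("human_review", prediction)  # the fallback path

print(route("approve_claim", 0.97))  # ('auto', 'approve_claim')
print(route("approve_claim", 0.62))  # ('human_review', 'approve_claim')
```

The threshold is a business decision, not a technical one: in insurance it trades annoyance against throughput; in medical settings it must be set so that the fallback path, not the model, owns the hard cases.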
These three questions aren’t another workshop innovation framework. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
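The arithmetic behind such a case can be sketched in a few lines. The figures below reuse the examples from the text; the annual cost figures and the function name are hypothetical placeholders, not a standard formula.

```python
# Illustrative only: a minimal business-case check using the example
# figures from the text (processing time 48 -> 12 h, error rate 8% -> 2%).
# The annual cost figures per metric are hypothetical placeholders.

def business_case(baseline, target, annual_cost_per_unit):
    """Return the projected annual saving for one quantified metric."""
    improvement = (baseline - target) / baseline   # relative improvement
    return improvement * annual_cost_per_unit

# Processing time: 48 h -> 12 h on a process costing €800,000/year to run
saving_time = business_case(48, 12, 800_000)        # 75% -> €600,000

# Error rate: 8% -> 2% where rework costs €400,000/year
saving_errors = business_case(0.08, 0.02, 400_000)  # 75% -> ~€300,000

total = saving_time + saving_errors
print(f"Projected annual saving: €{total:,.0f}")
```

If you cannot fill in the `baseline` and `target` arguments with real numbers, the project fails the test this section describes: it is still a hypothesis, not a business case.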
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
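A baseline of this kind can often be derived directly from existing process logs. The sketch below assumes hypothetical case records with opened/closed timestamps; the field names and sample data are illustrative.

```python
# Sketch: deriving a process baseline (median cycle time) from
# timestamped records. The record layout is hypothetical.
from datetime import datetime
from statistics import median

records = [
    {"opened_at": "2025-03-01T09:00", "closed_at": "2025-03-03T09:00"},
    {"opened_at": "2025-03-02T10:00", "closed_at": "2025-03-04T22:00"},
    {"opened_at": "2025-03-05T08:00", "closed_at": "2025-03-06T08:00"},
]

def cycle_time_hours(rec):
    opened = datetime.fromisoformat(rec["opened_at"])
    closed = datetime.fromisoformat(rec["closed_at"])
    return (closed - opened).total_seconds() / 3600

baseline = median(cycle_time_hours(r) for r in records)
print(f"Baseline median cycle time: {baseline:.1f} h")  # 48.0 h
```

The hard part in practice is not this calculation but getting trustworthy timestamps in the first place, which is exactly the groundwork that tends to be skipped.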
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
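One common shape for such a fallback plan is a confidence gate: the system only acts autonomously above a calibrated threshold and routes everything else to a person. The function names and the 0.9 threshold below are hypothetical.

```python
# Sketch of a confidence-gated fallback: below a calibrated threshold,
# the case is routed to a human instead of being auto-resolved.
# Names and the 0.9 threshold are assumptions for illustration.

CONFIDENCE_THRESHOLD = 0.9

def route_claim(prediction, confidence):
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("auto", prediction)           # AI decides
    return ("human_review", prediction)       # fallback: specialist decides

print(route_claim("approve", 0.97))  # ('auto', 'approve')
print(route_claim("reject", 0.62))   # ('human_review', 'reject')
```

Where errors are catastrophic rather than annoying, the threshold rises toward 1.0 and the gate degenerates into "a human always decides" — which is itself a legitimate answer to question three.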
These three questions aren’t innovation-workshop templates. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
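For teams that want to operationalize the check, the five questions can be encoded as a trivial scorer; the wording is condensed from the list above, and the four-of-five pass mark follows the text.

```python
# The five-question reality check above, encoded as a simple scorer.
CHECKLIST = [
    "Concrete, measurable target value for every active AI project?",
    "Designated process owner accountable for success?",
    "Documented data baseline for the affected processes?",
    "Fallback strategy for failures or erroneous outputs?",
    "Initiatives measured against business metrics?",
]

def problem_first_ready(answers):
    """answers: five booleans; 'yes' to at least four passes."""
    assert len(answers) == len(CHECKLIST)
    return sum(answers) >= 4

print(problem_first_ready([True, True, True, True, False]))  # True
print(problem_first_ready([True, False, True, False, True])) # False
```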
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
exact inverse of every proven approach to innovation. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
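As an illustration, the arithmetic behind such a case fits in a few lines of Python. The 48-to-12-hour processing time and the €200,000 quarterly saving are the figures named above; the €600,000 implementation cost is a made-up placeholder, not a benchmark.

```python
from dataclasses import dataclass

@dataclass
class BusinessCase:
    """Quantified AI business case: baseline, target, and money."""
    name: str
    baseline: float             # current metric (e.g. hours per case)
    target: float               # metric the project commits to
    quarterly_benefit: float    # estimated saving or uplift per quarter, in EUR
    implementation_cost: float  # one-off cost to build and integrate, in EUR

    def improvement_pct(self) -> float:
        return (self.baseline - self.target) / self.baseline * 100

    def payback_quarters(self) -> float:
        return self.implementation_cost / self.quarterly_benefit

# 48 h -> 12 h and EUR 200,000/quarter come from the text above;
# the EUR 600,000 implementation cost is an invented example value.
case = BusinessCase("claims processing", baseline=48, target=12,
                    quarterly_benefit=200_000, implementation_cost=600_000)
print(f"{case.improvement_pct():.0f}% faster")            # 75% faster
print(f"payback in {case.payback_quarters():.0f} quarters")  # 3 quarters
```

If you can’t fill in those four fields, the project is still a hypothesis, not a case.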
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
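Establishing that baseline is mostly plumbing, not modeling. A minimal sketch, assuming a hypothetical event log of case open/close timestamps:

```python
from datetime import datetime
from statistics import median

# Hypothetical event log: (case_id, opened, closed) timestamps.
log = [
    ("C-1", datetime(2025, 3, 1, 9),  datetime(2025, 3, 3, 9)),   # 48 h
    ("C-2", datetime(2025, 3, 2, 8),  datetime(2025, 3, 3, 20)),  # 36 h
    ("C-3", datetime(2025, 3, 4, 10), datetime(2025, 3, 6, 22)),  # 60 h
]

# Cycle time per case, in hours.
durations_h = [(closed - opened).total_seconds() / 3600
               for _, opened, closed in log]

# Median rather than mean, so a few outlier cases don't distort the baseline.
baseline_h = median(durations_h)
print(f"baseline cycle time: {baseline_h:.0f} h")
```

Only once this number is documented can anyone later claim, credibly, that the AI made the process faster.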
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
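One minimal form such a fallback can take is confidence-based routing: the system decides on its own only above a calibrated threshold, and everything else goes to a person. The 0.9 threshold below is an assumption for illustration, not a recommendation.

```python
def route(prediction: str, confidence: float, threshold: float = 0.9):
    """Route low-confidence AI outputs to a human reviewer.

    `threshold` is an assumed cut-off; in practice it must be calibrated
    against the model's measured error rate and the cost of a mistake.
    """
    if confidence >= threshold:
        return ("auto", prediction)         # AI decides
    return ("human_review", prediction)     # fallback: a person decides

print(route("approve_claim", 0.97))  # ('auto', 'approve_claim')
print(route("approve_claim", 0.62))  # ('human_review', 'approve_claim')
```

In a domain where errors are catastrophic, the honest threshold may be so high that the "AI project" is really a decision-support project – which is fine, as long as that was the plan.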
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
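The scoring rule is simple enough to write down. The five questions and the four-of-five threshold are taken directly from the checklist above; everything else is illustrative.

```python
CHECKLIST = [
    "measurable target value for every active AI project",
    "process owner accountable for success, not just the tech",
    "documented data baseline for the affected processes",
    "fallback strategy for failures or erroneous outputs",
    "success measured against business metrics",
]

def readiness(answers: list[bool]) -> str:
    """At least four honest 'yes' answers out of five -> problem-first."""
    assert len(answers) == len(CHECKLIST)
    return "problem-first" if sum(answers) >= 4 else "sharpen the roadmap"

print(readiness([True, True, True, True, False]))   # problem-first
print(readiness([True, True, False, False, False])) # sharpen the roadmap
```

The point of the threshold is honesty, not precision: one missing answer is a gap to close, three missing answers are a roadmap problem.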
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proofs of concept simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
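A business case like this can be sanity-checked with a few lines of arithmetic before any tooling decision. The sketch below reuses the 48-to-12-hour example from the text; the case volume, hourly cost, and implementation cost are purely illustrative assumptions, not figures from the article.

```python
# Illustrative business-case check. The processing-time figures come from the
# text; case volume, hourly cost, and implementation cost are hypothetical.

baseline_hours = 48       # current processing time per case (from the text)
target_hours = 12         # target processing time per case (from the text)
cases_per_quarter = 500   # assumed case volume (hypothetical)
cost_per_hour = 60.0      # assumed fully loaded cost in EUR (hypothetical)

hours_saved = (baseline_hours - target_hours) * cases_per_quarter
quarterly_savings = hours_saved * cost_per_hour

implementation_cost = 150_000.0  # assumed one-off cost (hypothetical)
payback_quarters = implementation_cost / quarterly_savings

print(f"Quarterly savings: EUR {quarterly_savings:,.0f}")
print(f"Payback period:    {payback_quarters:.2f} quarters")
```

If you cannot fill in these variables with defensible numbers, that is the signal the text describes: you have a hypothesis, not a project.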
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and the CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
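Establishing a baseline can be as unglamorous as measuring what already happens. A minimal sketch, assuming you can export open/resolve timestamps per ticket (the data and field layout here are hypothetical):

```python
from datetime import datetime
from statistics import median

# Hypothetical export of (opened, resolved) timestamps per ticket.
tickets = [
    ("2025-03-01T09:00", "2025-03-03T09:00"),
    ("2025-03-02T10:00", "2025-03-02T22:00"),
    ("2025-03-04T08:00", "2025-03-06T20:00"),
]

def cycle_time_hours(opened: str, resolved: str) -> float:
    """Elapsed hours between opening and resolving a ticket."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(resolved, fmt) - datetime.strptime(opened, fmt)
    return delta.total_seconds() / 3600

durations = [cycle_time_hours(o, r) for o, r in tickets]
baseline = median(durations)  # median is robust against outlier tickets
print(f"Baseline cycle time: {baseline:.1f} h")
```

The point isn’t the code – it’s that without this number in hand, any later claim that “AI sped things up” is unverifiable.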
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
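For teams that want to repeat this reality check over time, the five questions and the four-out-of-five threshold can be encoded as a trivial scorer – a sketch, nothing more:

```python
# The five readiness questions from the checklist above, paraphrased.
QUESTIONS = (
    "Concrete, measurable target value for every active AI project?",
    "Designated process owner accountable for success, not just the tech?",
    "Documented data baseline for the processes AI should improve?",
    "Fallback strategy for failures or erroneous outputs?",
    "Initiatives measured against business metrics, not project counts?",
)

def problem_first_ready(answers: list[bool], threshold: int = 4) -> bool:
    """True if at least `threshold` of the five answers are honest yeses."""
    if len(answers) != len(QUESTIONS):
        raise ValueError(f"expected {len(QUESTIONS)} answers")
    return sum(answers) >= threshold

# Four yeses out of five clears the bar; three does not.
print(problem_first_ready([True, True, True, True, False]))
```

The hard part, of course, is answering honestly – the scorer only keeps you from moving the goalposts between quarters.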
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proofs of concept.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts, 80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and then looks for problems they might fit. That’s the exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand: Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
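The arithmetic behind such a business case is deliberately simple. As a rough sketch of what "quantified in hard numbers" means in practice, the calculation below turns a measured baseline and a target into a break-even estimate. All figures and variable names are hypothetical placeholders, not data from any real project:

```python
# Illustrative back-of-the-envelope business case for one AI project.
# Every number here is a made-up placeholder - substitute measured values.

baseline_hours_per_case = 48       # measured current processing time
target_hours_per_case = 12         # quantified target with AI support
cases_per_quarter = 2_000          # current volume
cost_per_hour_eur = 45             # loaded labor cost

hours_saved = (baseline_hours_per_case - target_hours_per_case) * cases_per_quarter
gross_benefit_eur = hours_saved * cost_per_hour_eur

implementation_cost_eur = 350_000  # tooling, integration, change management
quarters_to_break_even = implementation_cost_eur / gross_benefit_eur

print(f"Hours saved per quarter:   {hours_saved:,}")
print(f"Gross benefit per quarter: EUR {gross_benefit_eur:,.0f}")
print(f"Quarters to break even:    {quarters_to_break_even:.2f}")
```

If you cannot fill in the baseline row of a calculation like this, the project fails the "hypothesis, not an AI project" test from the paragraph above.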
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped. A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
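Run across a whole portfolio, the scoring rule ("at least four yes answers") is trivial to apply mechanically. The sketch below does exactly that; the project names and answers are invented examples, not real data:

```python
# Score the five-question readiness check for a portfolio of AI projects.
# Project names and yes/no answers are made-up examples.

QUESTIONS = [
    "concrete, measurable target value",
    "accountable process owner",
    "documented data baseline",
    "fallback strategy for failures",
    "measured against business metrics",
]

def verdict(answers: list[bool]) -> str:
    """At least 4 of 5 yes answers means already operating problem-first."""
    return "problem-first" if sum(answers) >= 4 else "sharpen the roadmap"

portfolio = {
    "invoice-triage": [True, True, True, True, False],
    "chatbot-pilot":  [True, False, False, False, False],
}

for name, answers in portfolio.items():
    print(f"{name}: {sum(answers)}/{len(QUESTIONS)} -> {verdict(answers)}")
```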
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is a solution without a problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
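The targets above reduce to simple payback arithmetic. A minimal sketch, using the €200,000-per-quarter savings figure from the text together with an assumed, purely hypothetical implementation cost of €350,000:

```python
# Back-of-the-envelope payback calculation for an AI business case.
# The quarterly benefit comes from the example in the text; the
# implementation cost is an assumption for illustration only.

def payback_months(quarterly_benefit: float, implementation_cost: float) -> float:
    """Months until cumulative benefit covers the upfront cost."""
    monthly_benefit = quarterly_benefit / 3
    return implementation_cost / monthly_benefit

months = payback_months(quarterly_benefit=200_000, implementation_cost=350_000)
print(round(months, 2))  # ~5.25 months to break even
```

If you cannot fill in both numbers for a planned project, that is the hypothesis stage the text describes, not yet a business case.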
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and its CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
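Establishing that baseline is mostly measurement work, not modeling work. A minimal sketch of computing a cycle-time baseline from process timestamps; the ticket data and its layout are hypothetical:

```python
# Baseline sketch: median cycle time of a process, derived from
# (start, end) timestamps. Data below is invented for illustration.
from datetime import datetime
from statistics import median

tickets = [
    ("2025-03-01T09:00", "2025-03-03T09:00"),  # 48 h
    ("2025-03-02T10:00", "2025-03-04T22:00"),  # 60 h
    ("2025-03-05T08:00", "2025-03-06T20:00"),  # 36 h
]

def cycle_hours(start: str, end: str) -> float:
    """Elapsed hours between two ISO-like timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

baseline = median(cycle_hours(s, e) for s, e in tickets)
print(f"Baseline cycle time: {baseline:.0f} h")  # prints 48 h for this sample
```

Only once a number like this exists can a target such as “48 to 12 hours” be verified rather than asserted.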
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
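The reality check above can be sketched as a tiny self-assessment script; the question texts mirror the checklist, and the example answers are placeholders:

```python
# The two-minute reality check as code. Answer each checklist question
# True/False; four or more "yes" answers means problem-first operation.

CHECKLIST = [
    "Concrete, measurable target value for every active AI project?",
    "Designated process owner accountable for success?",
    "Documented data baseline for the affected processes?",
    "Fallback strategy for failures or erroneous outputs?",
    "Success measured against business metrics, not project counts?",
]

def problem_first(answers: list) -> bool:
    """At least four of five 'yes' answers passes the check."""
    return sum(answers) >= 4

answers = [True, True, True, False, True]  # example self-assessment
print("Problem-first" if problem_first(answers) else "Sharpen the roadmap")
```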
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts, 80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and then looks for problems they might fit. That’s the exact inversion of proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand: Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
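As a back-of-the-envelope illustration, targets like those above can be turned into a simple calculation before any tool is procured. All figures here – the €25 fully loaded hourly cost and the 500 cases per quarter – are hypothetical placeholders, not numbers from the article:

```python
def business_case(baseline: float, target: float, unit_value: float,
                  volume_per_quarter: int, implementation_cost: float) -> dict:
    """Estimate quarterly savings and payback for one process improvement.

    baseline/target are per-case effort (e.g. hours), unit_value is the
    cost of one unit of effort (e.g. EUR per hour) - all illustrative.
    """
    saved_per_case = (baseline - target) * unit_value
    quarterly_savings = saved_per_case * volume_per_quarter
    payback_quarters = (implementation_cost / quarterly_savings
                        if quarterly_savings > 0 else float("inf"))
    return {
        "quarterly_savings": round(quarterly_savings, 2),
        "payback_quarters": round(payback_quarters, 2),
    }

# Hypothetical scenario echoing the article's example target:
# processing time cut from 48 to 12 hours per case.
case = business_case(baseline=48, target=12, unit_value=25,
                     volume_per_quarter=500, implementation_cost=200_000)
```

If the resulting payback horizon is longer than the organization’s patience, that is worth knowing before the first prompt, not after.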
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped. A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t another workshop innovation framework. They’re filters. Answer them honestly, and you’ll likely find that only three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
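The four-of-five rule above can be expressed as a trivial score. Question wording is abbreviated and the function is purely illustrative:

```python
# The five checklist questions, abbreviated for readability.
CHECKLIST = [
    "measurable target value for every active AI project",
    "designated process owner accountable for success",
    "documented data baseline for the targeted processes",
    "fallback strategy for failures or erroneous outputs",
    "success measured against business metrics",
]

def readiness(answers: list[bool]) -> str:
    """At least four honest 'yes' answers indicates problem-first operation."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("answer every question")
    return "problem-first" if sum(answers) >= 4 else "sharpen the roadmap"
```

The point is not the code but the discipline: the answers have to be honest, and the score has to have consequences for the roadmap.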
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren't another innovation-workshop framework. They're filters. Answer them honestly, and you'll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
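The five-question reality check above can be sketched as a trivial scoring script. The question texts are condensed from the checklist, and the "at least four out of five" rule is the one stated above:

```python
READINESS_CHECK = [
    "Concrete, measurable target value for every active AI project?",
    "Designated process owner accountable for success, not just the tech?",
    "Documented data baseline for the processes AI should improve?",
    "Fallback strategy if the AI system fails or errs?",
    "Initiatives measured against business metrics, not project counts?",
]

def problem_first_ready(answers: list[bool]) -> bool:
    """'Yes' to at least four of the five questions means problem-first."""
    assert len(answers) == len(READINESS_CHECK), "one answer per question"
    return sum(answers) >= 4

print(problem_first_ready([True, True, True, True, False]))   # four yes: ready
print(problem_first_ready([True, True, False, False, True]))  # three yes: not yet
```

The point of encoding it at all is the same as the checklist's: the answers are booleans, not narratives, so there's nowhere to hide.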
Fewer projects, more impact. Less vision, more engineering. Less "AI-first," more "Problem-first, AI-enabled." This isn't a brake on innovation. It's the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
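Establishing that baseline rarely needs a model at all; it usually means pulling timestamps out of systems you already run. A minimal sketch, with illustrative timestamps standing in for a real ticketing-system export:

```python
# Baseline sketch: derive the current cycle time from existing process logs
# before any model work begins. Timestamps are illustrative placeholders.
from datetime import datetime
from statistics import median

# (case_opened, case_closed) pairs, e.g. exported from a ticketing system
events = [
    (datetime(2025, 3, 1, 9, 0),  datetime(2025, 3, 3, 9, 0)),   # 48 h
    (datetime(2025, 3, 2, 10, 0), datetime(2025, 3, 4, 4, 0)),   # 42 h
    (datetime(2025, 3, 3, 8, 0),  datetime(2025, 3, 5, 14, 0)),  # 54 h
]

cycle_hours = [(closed - opened).total_seconds() / 3600 for opened, closed in events]
baseline = median(cycle_hours)
print(f"Baseline cycle time: {baseline:.0f} h")  # the number every target is measured against
```

The median is used rather than the mean so a few pathological outlier cases do not distort the number the target will be judged against.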
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there's a fallback plan when they do. In insurance damage assessment, a misclassified claim is an annoyance. In medical diagnostics, it's a catastrophe.
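One common fallback pattern is a confidence threshold: the system acts on clear-cut cases and routes everything else to a human, so model errors degrade into extra review work rather than wrong decisions. The sketch below is a generic illustration; the threshold value and the claim-classification setting are assumptions, not a prescription from the article:

```python
# Fallback sketch: auto-resolve only high-confidence classifications,
# escalate everything else to a human reviewer (augmentation, not replacement).
from dataclasses import dataclass

@dataclass
class Classification:
    label: str
    confidence: float   # model's probability for the predicted label

CONFIDENCE_THRESHOLD = 0.90  # illustrative; tune against the measured error rate

def route(result: Classification) -> str:
    """Decide whether the AI's answer is used directly or reviewed by a human."""
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return "auto_resolve"
    return "human_review"

print(route(Classification("water_damage", 0.97)))  # clear-cut case
print(route(Classification("liability", 0.62)))     # ambiguous -> human
```

The threshold itself becomes a business decision: lowering it shifts work from humans to the model and raises the error exposure, which is exactly the trade-off the fallback question forces into the open.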
These three questions aren’t another innovation-workshop framework. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
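The five checks above can be encoded as a simple self-assessment, with four or more "yes" answers matching the threshold in the text. The question keys and the sample answers are illustrative:

```python
# Readiness sketch: score the five-question checklist; >= 4 "yes" answers
# means the organization is already operating problem-first.

CHECKLIST = [
    "measurable target value for every active AI project",
    "designated process owner accountable for success",
    "documented data baseline for the affected processes",
    "fallback strategy for failures or erroneous outputs",
    "initiatives measured against business metrics",
]

def readiness(answers: dict[str, bool]) -> str:
    yes_count = sum(answers.values())
    return "problem-first" if yes_count >= 4 else "sharpen the roadmap"

# Illustrative self-assessment: four honest yes answers, one gap
answers = dict(zip(CHECKLIST, [True, True, False, True, True]))
print(readiness(answers))
```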
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
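Establishing a baseline need not be elaborate. A minimal sketch (illustrative Python; the timestamps are invented, and in practice the records would come from a ticketing system or process log) derives the median cycle time from start/end pairs:

```python
from datetime import datetime
from statistics import median

def baseline_cycle_time_hours(records):
    """Median cycle time in hours from (start, end) timestamp pairs.

    Deliberately minimal: the point is having a documented number at all,
    not a sophisticated measurement pipeline.
    """
    durations = [(end - start).total_seconds() / 3600 for start, end in records]
    return median(durations)

# Invented records for three processed cases.
records = [
    (datetime(2025, 3, 3, 9, 0), datetime(2025, 3, 5, 9, 0)),    # 48 h
    (datetime(2025, 3, 4, 8, 0), datetime(2025, 3, 5, 20, 0)),   # 36 h
    (datetime(2025, 3, 6, 10, 0), datetime(2025, 3, 8, 22, 0)),  # 60 h
]
print(baseline_cycle_time_hours(records))  # → 48.0
```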
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn't
whether errors occur – but whether a fallback plan exists for when they do. In insurance claims processing, a misclassified claim is an annoyance. In medical diagnostics, it's a catastrophe.
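One common fallback pattern is a confidence threshold: predictions the model is unsure about are routed to a human instead of being auto-applied. A hedged sketch (the function, labels, and threshold are hypothetical; a real threshold would be tuned against the measured error rate and the cost of a wrong decision):

```python
def route_claim(prediction, confidence, threshold=0.90):
    """Route an AI classification: auto-apply only above the threshold.

    The 0.90 threshold is an illustrative assumption, not a recommendation;
    it is the explicit knob that encodes "what happens if the AI is wrong".
    """
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

print(route_claim("approve", 0.97))  # → ('auto', 'approve')
print(route_claim("reject", 0.62))   # → ('human_review', 'reject')
```

The design choice matters more than the code: the fallback path must exist in the workflow before the system goes live, not be improvised after the first incident.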
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn't a brake on innovation. It's the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
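Establishing that baseline is often mundane engineering: pull the process log, compute cycle times, record the numbers. A sketch under assumed inputs – the log format and field names are made up for illustration, not any real system's schema:

```python
# Illustrative baseline measurement from process logs. The record format
# ("opened"/"closed" ISO timestamps) is an assumption for this sketch.
from datetime import datetime
from statistics import mean, median

def baseline_from_log(records):
    """Compute a cycle-time baseline (in hours) from start/end timestamps."""
    durations = [
        (datetime.fromisoformat(r["closed"])
         - datetime.fromisoformat(r["opened"])).total_seconds() / 3600
        for r in records
    ]
    return {"cases": len(durations),
            "mean_h": mean(durations),
            "median_h": median(durations)}

log = [
    {"opened": "2025-03-01T09:00", "closed": "2025-03-03T09:00"},  # 48 h
    {"opened": "2025-03-02T08:00", "closed": "2025-03-03T08:00"},  # 24 h
]
print(baseline_from_log(log))  # {'cases': 2, 'mean_h': 36.0, 'median_h': 36.0}
```

Only once numbers like these exist can a target ("from 36 to 12 hours median") mean anything.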
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
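One common fallback pattern is a confidence gate: the system acts autonomously only when the model is sufficiently sure, and routes everything else to a human. A sketch – the model interface, the 0.9 threshold, and the toy model are all assumptions; real systems calibrate this per use case:

```python
# Sketch of a confidence-gated fallback (human-in-the-loop routing).
# The model interface and the threshold value are illustrative assumptions.

def classify_with_fallback(model, case, threshold=0.9):
    """Accept the model's answer only above the confidence threshold;
    otherwise route the case to a human reviewer."""
    label, confidence = model(case)
    if confidence >= threshold:
        return label, "auto"
    return None, "human_review"

# Toy stand-in model: returns a fixed prediction with a confidence score.
def toy_model(case):
    return ("approve", 0.95) if "minor" in case else ("approve", 0.55)

print(classify_with_fallback(toy_model, "minor windshield damage"))  # ('approve', 'auto')
print(classify_with_fallback(toy_model, "total loss, disputed"))     # (None, 'human_review')
```

The point is not the three lines of code but the design decision they force: someone must own what happens on the `human_review` path before go-live.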
These three questions aren’t another innovation-workshop framework. They’re filters. Answer them honestly, and you’ll likely find that only three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
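As an illustration only – all figures and names below are hypothetical – that discipline can be reduced to a small calculation: a project is approved only if its quantified annual benefit clears its cost by a defined margin.

```python
from dataclasses import dataclass

@dataclass
class BusinessCase:
    name: str
    baseline: float          # current metric, e.g. processing hours per case
    target: float            # committed target for the same metric
    value_per_unit: float    # euros gained per unit of improvement, per year
    annual_cost: float       # implementation plus run cost per year

    def annual_benefit(self) -> float:
        # Benefit is the committed improvement times its monetary value.
        return (self.baseline - self.target) * self.value_per_unit

    def approved(self, min_roi: float = 1.5) -> bool:
        # Approve only if the benefit exceeds cost by the required margin.
        return self.annual_benefit() >= min_roi * self.annual_cost

# Hypothetical example: cutting processing time from 48 to 12 hours,
# where each hour saved is worth 6,000 euros per year.
case = BusinessCase("claims triage", baseline=48, target=12,
                    value_per_unit=6_000, annual_cost=120_000)
print(case.annual_benefit())  # 216000.0
print(case.approved())        # True
```

If you can’t fill in those fields with defensible numbers, the paragraph above applies: you have a hypothesis, not a project.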
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
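A baseline doesn’t need to be elaborate. As a minimal sketch – the timestamps here are invented – it can start as the median cycle time computed from existing process logs:

```python
from datetime import datetime
from statistics import median

# Hypothetical process log: (opened, closed) timestamps per case.
log = [
    (datetime(2025, 3, 1, 9), datetime(2025, 3, 3, 9)),    # 48 h
    (datetime(2025, 3, 2, 8), datetime(2025, 3, 3, 20)),   # 36 h
    (datetime(2025, 3, 4, 10), datetime(2025, 3, 7, 10)),  # 72 h
]

# Cycle time in hours for each case.
hours = [(end - start).total_seconds() / 3600 for start, end in log]

# Median is more robust to outliers than the mean.
baseline = median(hours)
print(baseline)  # 48.0
```

Only with a number like this in hand can anyone later claim – or refute – that AI made the process faster.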
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan when they do. In insurance damage assessment, a misclassified claim may be an annoyance. In medical diagnostics, it’s a catastrophe.
These three questions aren’t another innovation-workshop exercise. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
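For illustration, the “yes to at least four” rule above is mechanical enough to write down directly – the answers here are invented:

```python
# The five readiness questions, answered honestly with True/False.
answers = {
    "measurable target per active project": True,
    "accountable process owner per project": True,
    "documented data baseline": False,
    "fallback strategy for failures": True,
    "measured on business metrics": True,
}

# Problem-first readiness: at least four honest "yes" answers.
problem_first = sum(answers.values()) >= 4
print(problem_first)  # True
```

The point of writing it down is that the inputs must be honest booleans, not aspirations.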
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
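The arithmetic behind such a business case fits in a few lines. The figures below are purely illustrative, borrowed from the examples in the text plus assumed volume and cost-per-hour values, not real project data:

```python
# Sketch of a quantified business case. Every number is an assumption
# for illustration (48h -> 12h comes from the text; volume and hourly
# cost are invented), not real data.

baseline_hours = 48        # current processing time per case
target_hours = 12          # target after AI support
cases_per_quarter = 200    # assumed case volume
cost_per_hour = 50.0       # assumed fully loaded cost, in euros

hours_saved = (baseline_hours - target_hours) * cases_per_quarter
quarterly_saving = hours_saved * cost_per_hour

implementation_cost = 150_000.0   # assumed one-off cost, in euros

print(f"Quarterly saving: EUR {quarterly_saving:,.0f}")
print(f"Payback in quarters: {implementation_cost / quarterly_saving:.1f}")
```

If you cannot fill in these variables for a planned project, the article's test applies: you have a hypothesis, not an AI project.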
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren't another innovation-workshop exercise. They're filters. Answer them honestly, and you'll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
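Applied mechanically, the filter looks something like this. The candidate projects and field names are invented for illustration; the point is simply that a missing answer to any of the three questions disqualifies the project:

```python
# Minimal sketch of the problem-first filter. Candidate projects are
# made-up examples; any unanswered question disqualifies the project.

def passes_filter(project):
    """A project passes only if all three questions have a concrete answer."""
    has_problem = bool(project.get("measurable_problem"))
    has_baseline = (project.get("baseline") is not None
                    and project.get("target") is not None)
    has_fallback = bool(project.get("fallback_plan"))
    return has_problem and has_baseline and has_fallback

candidates = [
    {"name": "invoice triage", "measurable_problem": "cut cycle time",
     "baseline": 48, "target": 12, "fallback_plan": "manual review queue"},
    {"name": "do something with AI", "measurable_problem": None,
     "baseline": None, "target": None, "fallback_plan": None},
]

approved = [p["name"] for p in candidates if passes_filter(p)]
print(approved)  # only the project with problem, baseline, and fallback survives
```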
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
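The five-question check can be scored the same way. The threshold of four "yes" answers is the one stated above; the example answers are invented:

```python
# The two-minute readiness check as code. The example answers are made up.

CHECKLIST = [
    "measurable target per project",
    "accountable process owner",
    "documented data baseline",
    "fallback strategy",
    "measured against business metrics",
]

def readiness(answers):
    """answers: dict mapping checklist item -> bool. Returns (score, verdict)."""
    score = sum(bool(answers.get(item)) for item in CHECKLIST)
    verdict = "problem-first" if score >= 4 else "sharpen the roadmap"
    return score, verdict

example = {item: True for item in CHECKLIST}
example["fallback strategy"] = False   # one honest "no"
print(readiness(example))
```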
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
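The baseline itself can be as simple as a cycle-time statistic over existing process records. A sketch of the minimum, assuming timestamped start/end logs exist – field names and values here are hypothetical:

```python
from datetime import datetime
from statistics import median

# Hypothetical process log: (case_id, started, finished)
log = [
    ("A-1", datetime(2025, 3, 3, 9, 0),  datetime(2025, 3, 5, 9, 0)),
    ("A-2", datetime(2025, 3, 4, 8, 0),  datetime(2025, 3, 4, 20, 0)),
    ("A-3", datetime(2025, 3, 6, 10, 0), datetime(2025, 3, 8, 14, 0)),
]

# Cycle time in hours per case -- this is the baseline the AI must beat.
hours = [(end - start).total_seconds() / 3600 for _, start, end in log]

baseline_median = median(hours)
baseline_worst = max(hours)
print(f"median cycle time: {baseline_median:.1f} h, worst: {baseline_worst:.1f} h")
```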
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn't whether errors occur – but whether there's a fallback plan. In insurance damage assessment, a misclassified claim is an annoyance. In medical diagnostics, it's a catastrophe.
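One common shape for such a fallback plan is a confidence threshold: predictions below it are never auto-applied but routed to a human. A minimal sketch – the threshold value and labels are illustrative:

```python
CONFIDENCE_THRESHOLD = 0.85  # tuned per domain; far stricter in high-risk settings

def route_claim(prediction: str, confidence: float) -> str:
    """Return who handles the case: the model's decision or a human reviewer."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto:{prediction}"
    return "human_review"  # the fallback path every AI system needs

print(route_claim("approve", 0.97))  # auto:approve
print(route_claim("reject", 0.60))   # human_review
```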
These three questions aren't another innovation-workshop framework. They're filters. Answer them honestly, and you'll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
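For teams that like to make the two-minute check literal, the five questions reduce to a tiny script – the answers below are placeholders, not a verdict on any real organization:

```python
checklist = {
    "measurable target for every active project": True,
    "accountable process owner per project": True,
    "documented data baseline": False,
    "fallback strategy for failures or bad outputs": True,
    "measured against business metrics, not project count": True,
}

score = sum(checklist.values())  # True counts as 1
verdict = "problem-first" if score >= 4 else "sharpen the roadmap"
print(f"{score}/5 -> {verdict}")
```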
Fewer projects, more impact. Less vision, more engineering. Less "AI-first," more "Problem-first, AI-enabled." This isn't a brake on innovation. It's the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
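A minimal sketch of that discipline, with made-up names and numbers: without a recorded baseline, the function refuses to render a verdict at all.

```python
from typing import Optional

def evaluate_progress(baseline: Optional[float], target: float, measured: float) -> str:
    """Verdict on a lower-is-better metric (e.g. hours per case).
    Without a recorded baseline, no verdict is possible."""
    if baseline is None:
        raise ValueError("no baseline recorded - progress cannot be measured")
    if measured <= target:
        return "target met"
    if measured < baseline:
        return "improved, target not yet met"
    return "no measurable improvement"

# 48 h baseline, 12 h target, 30 h measured after the pilot
print(evaluate_progress(48.0, 12.0, 30.0))  # improved, target not yet met
```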
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. In insurance damage assessment, a misclassified claim is an annoyance. In medical diagnostics, it’s a catastrophe.
These three questions aren’t another innovation-workshop framework. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
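For those who prefer it mechanical, the four-out-of-five rule above amounts to a trivial score (questions paraphrased, purely illustrative):

```python
CHECKLIST = [
    "Concrete, measurable target value for every active AI project?",
    "Designated process owner accountable for success, not just the tech?",
    "Documented data baseline for the processes AI should improve?",
    "Fallback strategy for failures or erroneous outputs?",
    "Measured against business metrics, not the number of projects launched?",
]

def reality_check(answers: list[bool]) -> str:
    """Apply the four-out-of-five rule from the checklist."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("one answer per question expected")
    return "problem-first" if sum(answers) >= 4 else "sharpen the roadmap"

print(reality_check([True, True, True, True, False]))  # problem-first
```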
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and then looks for problems they might fit. That is the exact inversion of proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
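A business case that concrete can be sanity-checked in a few lines of arithmetic. The sketch below reuses the article’s illustrative €200,000-per-quarter figure; the €400,000 implementation cost and the helper function are hypothetical assumptions, not data from any real project:

```python
# Sketch: sanity-check a quantified AI business case before kickoff.
# Figures are illustrative; the EUR 400,000 implementation cost is hypothetical.

def payback_months(quarterly_savings_eur: float, implementation_cost_eur: float) -> float:
    """Months until cumulative savings cover the implementation cost."""
    # Monthly savings = quarterly savings / 3, so payback = cost / (quarterly / 3).
    return implementation_cost_eur * 3 / quarterly_savings_eur

# Target from the text: EUR 200,000 saved per quarter.
months = payback_months(quarterly_savings_eur=200_000, implementation_cost_eur=400_000)
print(f"Payback in {months:.1f} months")  # Payback in 6.0 months
```

If you can’t fill in those two numbers, the project fails the business-case test before a single line of code is written.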
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
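Establishing that baseline rarely requires heavy tooling. A minimal sketch, assuming you can export opened/closed timestamps for recent cases – the records below are invented for illustration, not a real system’s data:

```python
# Sketch: derive a cycle-time baseline from exported case timestamps.
# The (opened, closed) pairs are illustrative data, not a real schema.
from datetime import datetime
from statistics import median

cases = [
    (datetime(2025, 3, 1, 9), datetime(2025, 3, 3, 9)),    # 48 h
    (datetime(2025, 3, 2, 8), datetime(2025, 3, 3, 20)),   # 36 h
    (datetime(2025, 3, 4, 10), datetime(2025, 3, 7, 10)),  # 72 h
]

# Cycle time per case, in hours.
hours = [(closed - opened).total_seconds() / 3600 for opened, closed in cases]

# Median resists outlier cases better than the mean.
baseline = median(hours)
print(f"Baseline cycle time: {baseline:.0f} h")  # Baseline cycle time: 48 h
```

Only against a number like this can a “40% faster” target ever be verified.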
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
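One common shape for that fallback is a confidence threshold: anything the model is unsure about is routed to a person instead of being auto-resolved. A minimal sketch – the 0.9 threshold and the queue labels are assumptions to be tuned per use case, not a prescribed design:

```python
# Sketch: route low-confidence model outputs to human review instead of
# acting on them automatically. Threshold and labels are illustrative.

REVIEW_THRESHOLD = 0.9  # below this, a person decides

def dispatch(prediction: str, confidence: float) -> str:
    """Return the queue a case should land in."""
    if confidence >= REVIEW_THRESHOLD:
        return f"auto:{prediction}"
    return "human_review"

print(dispatch("approve_claim", 0.97))  # auto:approve_claim
print(dispatch("approve_claim", 0.62))  # human_review
```

The threshold itself becomes a business decision: where misclassification is merely annoying it can sit low; where it is catastrophic, nearly everything goes to a human.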
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
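The four-of-five rule is mechanical enough to sketch directly – the answers below are placeholders for your own honest assessment, not a verdict on any real organization:

```python
# Sketch: score the five readiness questions. Answers are placeholders.
answers = {
    "measurable_target_per_project": True,
    "accountable_process_owner": True,
    "documented_data_baseline": False,
    "fallback_strategy": True,
    "measured_by_business_metrics": True,
}

score = sum(answers.values())  # True counts as 1
verdict = "problem-first" if score >= 4 else "sharpen the roadmap"
print(f"{score}/5 -> {verdict}")  # 4/5 -> problem-first
```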
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
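A business case like the one above can be sanity-checked with a few lines of arithmetic. The sketch below uses the 48-to-12-hour target from the text; the case volume and labor cost are made-up assumptions for illustration, not figures from any real project.

```python
# Hypothetical business-case sketch. The 48h -> 12h target comes from the
# article; cases_per_quarter and cost_per_hour are illustrative assumptions.

def quarterly_savings(cases_per_quarter: int,
                      hours_before: float, hours_after: float,
                      cost_per_hour: float) -> float:
    """Monetary value of a cycle-time reduction, per quarter."""
    hours_saved = (hours_before - hours_after) * cases_per_quarter
    return hours_saved * cost_per_hour

savings = quarterly_savings(cases_per_quarter=500,
                            hours_before=48, hours_after=12,
                            cost_per_hour=60)
print(f"Estimated gross savings per quarter: EUR {savings:,.0f}")
```

If you can’t fill in these parameters for your own process, that is the hypothesis-not-a-project signal the paragraph describes.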
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
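The claims-specialist comparison is simple arithmetic, and writing it out makes the point concrete. All numbers below are illustrative assumptions taken from the scenario in the text (doubled throughput with AI support versus a bot that gets half of its cases wrong):

```python
# Toy augmentation-vs-replacement comparison; every figure is an assumption.
augmented_cases = 200       # specialist handles "twice as many cases" (baseline 100)
augmented_accuracy = 0.95   # assumed human-in-the-loop accuracy

bot_cases = 200             # fully automated throughput
bot_accuracy = 0.50         # "a bot incorrectly resolving half of them"

value_augmented = augmented_cases * augmented_accuracy  # correctly resolved cases
value_bot = bot_cases * bot_accuracy

print(f"augmented: {value_augmented:.0f} correct, bot: {value_bot:.0f} correct")
```

Even before counting the cost of cleaning up the bot’s wrong decisions, the augmented setup resolves almost twice as many cases correctly.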
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
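Establishing a baseline doesn’t require heavy tooling; it mostly requires actually measuring. A minimal sketch, assuming you can pull end-to-end processing times for a sample of past cases (the durations below are hypothetical):

```python
# Baseline sketch: summarize measured processing times before any model work.
# The sample durations are hypothetical placeholders.
from statistics import mean, median

durations_h = [52, 47, 61, 39, 44, 55, 48]  # end-to-end hours per past case

baseline = {
    "mean_hours": round(mean(durations_h), 1),
    "median_hours": median(durations_h),
    "sample_size": len(durations_h),
}
print(baseline)
```

Once this number exists and is documented, the target in the business case has something to be measured against.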
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t whether errors occur – but whether there’s a fallback plan when they do. In insurance claims assessment, a misclassified claim may be an annoyance. In medical diagnostics, it’s a catastrophe.
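One common shape of such a fallback plan is a confidence threshold: act automatically only when the model is sufficiently sure, and route everything else to a human. A minimal sketch, with a hypothetical threshold and labels:

```python
# Fallback sketch: low-confidence outputs go to human review instead of being
# acted on. The threshold and label names are assumptions for illustration.
CONFIDENCE_THRESHOLD = 0.90

def route_claim(label: str, confidence: float) -> str:
    """Return where a classified claim should go next."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto:{label}"    # high confidence: process automatically
    return "human_review"         # otherwise: fall back to a specialist

print(route_claim("approve", 0.97))  # auto:approve
print(route_claim("reject", 0.62))   # human_review
```

The right threshold depends on the cost of an error in your domain – which is exactly what question three forces you to articulate.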
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
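The two-minute check above is mechanical enough to write down as a tiny script. The answers below are hypothetical placeholders; substitute your own honest yes/no assessment:

```python
# The five-question reality check as a script; answers are hypothetical.
answers = {
    "measurable target per project": True,
    "accountable process owner": True,
    "documented data baseline": False,
    "fallback strategy": True,
    "measured against business metrics": True,
}

score = sum(answers.values())  # count of "yes" answers
verdict = "problem-first" if score >= 4 else "sharpen the roadmap"
print(f"{score}/5 -> {verdict}")
```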
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
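A business case like this reduces to simple arithmetic. The sketch below quantifies time and error savings per quarter; the input figures (case volume, hourly cost, cost per error) are illustrative assumptions, not data from the text:

```python
# Rough business-case sketch: put an AI project in hard numbers.
# All input figures are illustrative assumptions.

def quarterly_value(hours_before, hours_after, cases_per_quarter,
                    cost_per_hour, error_rate_before, error_rate_after,
                    cost_per_error):
    """Estimate quarterly savings from faster processing and fewer errors."""
    time_savings = (hours_before - hours_after) * cases_per_quarter * cost_per_hour
    error_savings = (error_rate_before - error_rate_after) * cases_per_quarter * cost_per_error
    return time_savings + error_savings

# Example mirroring the targets above: 48h -> 12h, error rate 8% -> 2%
value = quarterly_value(
    hours_before=48, hours_after=12, cases_per_quarter=500,
    cost_per_hour=9.0, error_rate_before=0.08, error_rate_after=0.02,
    cost_per_error=1500.0,
)
print(f"Estimated quarterly value: EUR {value:,.0f}")  # ~EUR 207,000
```

If you cannot fill in these parameters for your own process, that is the hypothesis-not-a-project signal described above.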
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
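Establishing a baseline need not be elaborate. A minimal sketch, assuming you can export start and end timestamps per case from your workflow tool (the log format and field names here are hypothetical):

```python
# Minimal baseline sketch: cycle-time statistics from process logs.
# The log format ("started"/"finished" ISO timestamps) is an assumption.
from datetime import datetime
from statistics import mean, median

log = [  # hypothetical export from a workflow tool
    {"started": "2025-03-01T09:00", "finished": "2025-03-03T09:00"},
    {"started": "2025-03-02T10:00", "finished": "2025-03-04T16:00"},
    {"started": "2025-03-03T08:00", "finished": "2025-03-05T20:00"},
]

def cycle_hours(case):
    """Elapsed hours from case start to case completion."""
    start = datetime.fromisoformat(case["started"])
    end = datetime.fromisoformat(case["finished"])
    return (end - start).total_seconds() / 3600

durations = [cycle_hours(c) for c in log]
print(f"Baseline: mean {mean(durations):.1f}h, median {median(durations):.1f}h")
```

Report the median alongside the mean: a few outlier cases can otherwise make the baseline look worse (or better) than the typical case actually is.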
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
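One common answer to question three is a confidence threshold: predictions below it are never auto-resolved but routed to a person. A sketch under that assumption (the threshold value and labels are illustrative, not from the text):

```python
# Fallback sketch: auto-resolve only high-confidence model outputs,
# route everything else to human review. Threshold is an assumption.
CONFIDENCE_THRESHOLD = 0.90

def route(prediction, confidence):
    """Return the handling decision for one model output."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("auto", prediction)
    return ("human_review", prediction)  # fallback: a person decides

print(route("approve_claim", 0.97))  # high confidence -> automated
print(route("approve_claim", 0.62))  # low confidence -> human fallback
```

The right threshold depends on the cost of an error: an insurance workflow might tolerate 0.90, while a diagnostic setting would route nearly everything to review.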
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
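The five questions above lend themselves to a trivial self-assessment script; the four-of-five bar comes from the checklist's own rule, everything else is a sketch:

```python
# Two-minute readiness check as a script. Keys mirror the five questions.
answers = {  # replace with your honest yes/no answers
    "measurable_target_per_project": True,
    "accountable_process_owner": True,
    "documented_data_baseline": False,
    "fallback_strategy": True,
    "business_metrics_not_project_count": True,
}

score = sum(answers.values())  # True counts as 1
verdict = "problem-first" if score >= 4 else "sharpen the roadmap"
print(f"{score}/5 -> {verdict}")
```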
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts, 80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means a company buys AI tools and then looks for problems they might fit. That’s the exact reversal of proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is a solution without a problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand: Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
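The arithmetic behind such a business case can be sketched in a few lines. This is an illustrative model only – the field names and the €500,000 implementation cost are assumptions chosen to mirror the example targets above, not a prescribed template:

```python
from dataclasses import dataclass

@dataclass
class AIBusinessCase:
    # Hypothetical fields for a quantified business case
    baseline_hours: float          # current processing time per case
    target_hours: float            # intended processing time per case
    quarterly_benefit_eur: float   # projected savings or uplift per quarter
    implementation_cost_eur: float # one-off cost to build and integrate

    def time_reduction_pct(self) -> float:
        # Relative speed-up against the measured baseline
        return 100 * (1 - self.target_hours / self.baseline_hours)

    def payback_quarters(self) -> float:
        # How many quarters of benefit it takes to recoup the build cost
        return self.implementation_cost_eur / self.quarterly_benefit_eur

# Example: 48h -> 12h processing time, €200k benefit/quarter, €500k build cost
case = AIBusinessCase(48, 12, 200_000, 500_000)
print(f"{case.time_reduction_pct():.0f}% faster")               # 75% faster
print(f"payback after {case.payback_quarters():.1f} quarters")  # payback after 2.5 quarters
```

The point of the sketch is not the code but the discipline: if you cannot fill in these four numbers, the business case does not yet exist.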
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped. A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
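Because the filter is mechanical, it can even be expressed as a gate in code. The sketch below is a hypothetical illustration – the proposal fields and the crude checks are assumptions chosen to mirror the three questions, not a real screening tool:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Proposal:
    # Hypothetical record of a planned AI project
    problem: str               # Q1: the concrete, measurable problem
    baseline: Optional[float]  # Q2: measured current value (None = unknown)
    target: Optional[float]    # Q2: quantified target value
    fallback: Optional[str]    # Q3: plan for when the AI gets it wrong

def passes_filter(p: Proposal) -> bool:
    """A proposal passes only if all three questions have real answers."""
    # "We want to use AI" names a means, not a problem
    has_problem = bool(p.problem.strip()) and "use AI" not in p.problem
    has_baseline = p.baseline is not None and p.target is not None
    has_fallback = bool(p.fallback and p.fallback.strip())
    return has_problem and has_baseline and has_fallback

proposals = [
    Proposal("We want to use AI", None, None, None),
    Proposal("Cut claims processing from 48h to 12h", 48.0, 12.0,
             "route low-confidence cases to a manual review queue"),
]
survivors = [p for p in proposals if passes_filter(p)]
print(len(survivors))  # 1
```

In practice the filter is a conversation, not a script – but any proposal that couldn’t populate this record honestly shouldn’t be funded.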
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
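Scoring that checklist is deliberately simple – a minimal sketch, assuming five plain yes/no answers:

```python
def readiness(answers: list[bool]) -> str:
    # Five yes/no answers to the checklist above; four "yes" votes clear the bar
    if len(answers) != 5:
        raise ValueError("expected exactly five answers")
    return "problem-first" if sum(answers) >= 4 else "sharpen the roadmap"

print(readiness([True, True, True, True, False]))   # problem-first
print(readiness([True, False, True, False, True]))  # sharpen the roadmap
```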
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means a company buys AI tools first and then looks for problems they might fit – the exact inverse of proven innovation practice. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
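The arithmetic behind such targets is simple enough to sketch before the first prompt is ever written. A minimal illustration, assuming hypothetical figures for case volume and fully loaded cost per hour (neither is from the article):

```python
# Hypothetical sketch: quantify an AI business case before writing any model code.
# cases_per_quarter and cost_per_hour are illustrative assumptions.

def business_case_value(baseline_hours: float, target_hours: float,
                        cases_per_quarter: int, cost_per_hour: float) -> float:
    """Quarterly savings if processing time drops from baseline to target."""
    hours_saved_per_case = baseline_hours - target_hours
    return hours_saved_per_case * cases_per_quarter * cost_per_hour

# Using the article's example target: cutting processing time from 48 to 12 hours.
savings = business_case_value(baseline_hours=48, target_hours=12,
                              cases_per_quarter=100, cost_per_hour=55.0)
print(f"Projected quarterly savings: €{savings:,.0f}")  # €198,000
```

If you cannot fill in those four numbers for your own process, the business case does not yet exist – which is exactly the article's point.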
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped. A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t whether errors occur – but whether there’s a fallback plan. In insurance claims assessment, a misclassified claim is an annoyance. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
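The five-question reality check above can be expressed as a simple scoring function. A minimal sketch – question wording abbreviated, pass threshold (four of five) taken from the text:

```python
# Sketch of the reality check: pass if at least 4 of the 5 questions get a "yes".
# Question texts are shortened from the checklist above.

CHECKLIST = [
    "Concrete, measurable target value for every active AI project?",
    "Designated process owner accountable for success?",
    "Documented data baseline for the processes to improve?",
    "Fallback strategy if the AI system fails or errs?",
    "Initiatives measured against business metrics?",
]

def problem_first_ready(answers: list, threshold: int = 4) -> bool:
    """True if at least `threshold` checklist answers are 'yes' (True)."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("Expected one answer per checklist question")
    return sum(answers) >= threshold

print(problem_first_ready([True, True, True, True, False]))   # True
print(problem_first_ready([True, False, True, False, True]))  # False
```

The point is not the code but the discipline: the check is binary and takes two minutes, so there is no excuse for skipping it before a project kicks off.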
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts, 80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and then looks for problems they might fit. That inverts every proven innovation logic: technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand: Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
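The arithmetic behind such a business case fits in a few lines. A minimal sketch in Python – every figure here (case volume, labor cost, implementation and running costs) is an illustrative assumption, not data from any real project:

```python
# Sketch of business-case arithmetic for an AI project.
# All numbers are illustrative assumptions, not figures from a real deployment.

def quarterly_business_case(cases, minutes_saved_per_case, cost_per_hour,
                            run_cost, build_cost):
    """Return (net quarterly saving in EUR, payback period in quarters)."""
    gross = cases * minutes_saved_per_case / 60 * cost_per_hour
    net = gross - run_cost
    return net, build_cost / net

# Example: a process handled 10,000 times per quarter, where AI assistance
# cuts manual effort from an assumed 45 to 12 minutes per case.
net, payback = quarterly_business_case(
    cases=10_000,
    minutes_saved_per_case=45 - 12,
    cost_per_hour=36.0,     # assumed fully loaded labor cost
    run_cost=40_000,        # assumed quarterly operating cost of the AI system
    build_cost=300_000,     # assumed one-off implementation cost
)
print(f"Net saving per quarter: €{net:,.0f}")     # € 158,000
print(f"Payback period: {payback:.1f} quarters")  # 1.9 quarters
```

The point isn’t the tool – a spreadsheet does the same job. The point is that every input is an explicit, checkable assumption, and the payback period falls out mechanically instead of being asserted.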
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped. A fatal mistake.
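Establishing a baseline can start very small. A sketch, assuming you can export start and end timestamps per case from your ticketing system – the field format and sample data here are hypothetical:

```python
# Minimal baseline sketch: cycle time per case from exported timestamps.
# Timestamp format and sample cases are hypothetical; adapt to your system.
from datetime import datetime
from statistics import median

cases = [
    ("2025-03-01T09:00", "2025-03-03T09:00"),  # (opened, closed)
    ("2025-03-01T10:00", "2025-03-02T10:00"),
    ("2025-03-02T08:00", "2025-03-05T08:00"),
]

def hours(start: str, end: str) -> float:
    """Elapsed cycle time in hours between two ISO-like timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

durations = sorted(hours(s, e) for s, e in cases)
print(f"Median cycle time: {median(durations):.0f} h")  # Median cycle time: 48 h
print(f"Worst case: {durations[-1]:.0f} h")             # Worst case: 72 h
```

A median and a worst case over a few weeks of exports is already a usable baseline – and a far stronger starting point than a gut feeling about “how long things take.”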
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
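The checklist can even be run as a literal gate. A deliberately trivial sketch – the four-of-five threshold follows the text above; the example answers are made up:

```python
# Sketch: the five-question readiness check as a simple pass/fail gate.
# Threshold (4 of 5 "yes") follows the checklist; example answers are invented.
CHECKLIST = [
    "Concrete, measurable target value for every active AI project?",
    "Designated process owner accountable for success, not just the tech?",
    "Documented data baseline for the processes AI is meant to improve?",
    "Fallback strategy for system failures or erroneous outputs?",
    "Measured against business metrics, not the number of projects launched?",
]

def problem_first_ready(answers: list[bool]) -> bool:
    """At least four honest 'yes' answers means you operate problem-first."""
    assert len(answers) == len(CHECKLIST)
    return sum(answers) >= 4

print(problem_first_ready([True, True, True, True, False]))   # True
print(problem_first_ready([True, True, False, False, True]))  # False
```

The value isn’t in the code, of course – it’s in forcing each answer to be a committed yes or no rather than a “partly.”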
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means a company buys AI tools first and
then looks for problems they might fit. That’s the
exact reversal of proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
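The arithmetic behind such a business case can be made explicit. The sketch below reuses the 48-to-12-hour target from this paragraph; the case volume, hourly cost, and implementation figures are illustrative assumptions, not sourced numbers.

```python
# Illustrative business-case arithmetic for one AI project.
# All cost and volume figures are assumptions for the example.

baseline_hours = 48        # current processing time per case
target_hours = 12          # target after AI support
cases_per_quarter = 200    # assumed case volume
hourly_cost = 35.0         # assumed fully loaded cost per hour (EUR)

hours_saved = (baseline_hours - target_hours) * cases_per_quarter
quarterly_saving = hours_saved * hourly_cost

implementation_cost = 250_000.0   # assumed one-off build and integration (EUR)
quarterly_run_cost = 30_000.0     # assumed licences, hosting, maintenance (EUR)

# Quarters until the one-off cost is recovered from net savings
payback_quarters = implementation_cost / (quarterly_saving - quarterly_run_cost)

print(f"Hours saved per quarter: {hours_saved}")
print(f"Gross saving per quarter: EUR {quarterly_saving:,.0f}")
print(f"Payback period: {payback_quarters:.1f} quarters")
```

The point of the exercise is not the exact figures but that every input is a named, challengeable number – which is exactly what “efficiency gains” as a category is not.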
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
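The three questions can even be applied mechanically. The sketch below treats them as hard gates over a project proposal; the field names and sample proposals are hypothetical, chosen only to mirror the framework.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProjectProposal:
    # Field names are illustrative, not a standard schema.
    problem_statement: str        # question one: the concrete process to improve
    baseline: Optional[float]     # question two: measured current value
    target: Optional[float]       # question two: quantified goal
    fallback_plan: Optional[str]  # question three: what happens when the AI is wrong

def passes_problem_first_filter(p: ProjectProposal) -> bool:
    """Apply the three framework questions as hard gates."""
    has_problem = bool(p.problem_statement.strip()) and "use AI" not in p.problem_statement
    has_baseline_and_target = p.baseline is not None and p.target is not None
    has_fallback = bool(p.fallback_plan)
    return has_problem and has_baseline_and_target and has_fallback

proposals = [
    ProjectProposal("We want to use AI", None, None, None),
    ProjectProposal("Cut claims processing time", 48.0, 12.0, "route to human reviewer"),
]
approved = [p for p in proposals if passes_problem_first_filter(p)]
print(f"{len(approved)} of {len(proposals)} proposals pass")
```

Note that the filter is deliberately strict: a missing baseline or missing fallback plan fails the proposal outright, which is precisely the winnowing effect described above.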
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
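Read literally, the reality check is a five-item score with a threshold of four. The sketch below encodes it that way; the abbreviated question keys and the sample answers are illustrative.

```python
# A literal reading of the five-question reality check.
# Keys abbreviate the questions; the answers are example inputs.

checklist = {
    "measurable target for every active project": True,
    "accountable process owner per project": True,
    "documented data baseline": False,
    "fallback strategy for AI failures": True,
    "measured against business metrics": True,
}

yes_count = sum(checklist.values())
problem_first = yes_count >= 4  # the threshold above: at least four of five

verdict = "problem-first" if problem_first else "sharpen the roadmap"
print(f"{yes_count}/5 - {verdict}")
```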
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
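A business case of this kind is simple arithmetic. The sketch below, in Python purely for illustration, turns the article's example figures into a quantified case; the cost-per-hour value is an assumption, not a number from the text.

```python
# Minimal business-case sketch. The unit_value_eur figure is a
# hypothetical assumption; plug in your own labor or process costs.

def business_case(baseline, target, unit_value_eur):
    """Return relative improvement and estimated monetary value."""
    saved = baseline - target
    return {
        "improvement_pct": round(100 * saved / baseline, 1),
        "value_eur": saved * unit_value_eur,
    }

# Processing time cut from 48 to 12 hours, each saved hour assumed
# to be worth €50 in labor cost:
case = business_case(baseline=48, target=12, unit_value_eur=50)
print(case)  # {'improvement_pct': 75.0, 'value_eur': 1800}
```

If you cannot fill in those three arguments with defensible numbers, the paragraph above applies: you have a hypothesis, not a project.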
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
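Establishing that baseline is usually plain measurement, not modeling. A minimal sketch, assuming your ticketing system can export open/close timestamps (the sample data below is invented for illustration):

```python
from datetime import datetime

# Hypothetical ticket log of (opened, resolved) timestamps. In practice
# this would come from your ticketing system's export.
tickets = [
    (datetime(2025, 1, 6, 9, 0), datetime(2025, 1, 8, 9, 0)),    # 48 h
    (datetime(2025, 1, 6, 10, 0), datetime(2025, 1, 7, 10, 0)),  # 24 h
    (datetime(2025, 1, 7, 8, 0), datetime(2025, 1, 10, 8, 0)),   # 72 h
]

# Cycle time per ticket in hours, then the average as the baseline.
hours = [(done - opened).total_seconds() / 3600 for opened, done in tickets]
baseline = sum(hours) / len(hours)
print(f"baseline cycle time: {baseline:.1f} h")  # baseline cycle time: 48.0 h
```

The hard part is rarely this calculation — it is getting clean, complete timestamps in the first place, which is exactly the data-infrastructure groundwork discussed above.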
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan when they do. In insurance claims assessment, a misclassified claim is an annoyance. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
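Used as a filter, the three questions can even live in a project-intake template. A minimal sketch — the field names are illustrative, not a prescribed schema:

```python
# Problem-first filter: a project proposal passes only if all three
# questions have substantive answers. Field names are hypothetical.

REQUIRED = ("problem", "baseline", "target", "fallback")

def passes_filter(proposal: dict) -> bool:
    """True only when every required field has a non-empty answer."""
    return all(proposal.get(field) for field in REQUIRED)

proposal = {
    "problem": "Invoice matching takes 48 h on average",
    "baseline": "48 h median cycle time, measured over Q3",
    "target": "12 h median cycle time",
    "fallback": None,  # no answer to "what if the AI gets it wrong?"
}
print(passes_filter(proposal))  # False
```

The point is not the code but the discipline it encodes: a missing answer blocks the launch, rather than being postponed to "phase two."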
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
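The two-minute check reduces to counting honest yes answers. A trivial sketch — the answers below are placeholders, not an assessment of any real organization:

```python
# Readiness check: at least four of five honest "yes" answers means
# you are already operating problem-first. Answers are placeholders.

answers = {
    "measurable target per active project": True,
    "accountable process owner per project": True,
    "documented data baseline": True,
    "fallback strategy for AI failures": False,
    "measured against business metrics": True,
}

score = sum(answers.values())
print("problem-first" if score >= 4 else "sharpen the roadmap")  # problem-first
```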
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That’s the
exact reversal of every proven innovation logic. Technology doesn’t solve problems no one has articulated.
The first symptom is solution without problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and a CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. Misclassifying a claim may be an annoyance in insurance damage assessment. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
Lesser projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.
Frequently Asked Questions
Why do so many AI projects fail in enterprises?
The most common cause is a lack of problem definition. Companies procure AI tools without first articulating a concrete, measurable use case. Compounding factors include poor data quality, missing governance, and unclear success metrics.
What does “Problem-first, AI-enabled” mean?
“Problem-first, AI-enabled” means starting with a clearly defined, high-impact business challenge – and then selecting, adapting, and integrating AI tools specifically to solve it. AI serves the problem, not the other way around.
What’s the difference between augmentation and replacement in AI?
Augmentation means AI supports and enhances human work – e.g., accelerating data analysis or suggesting next steps. Replacement fully substitutes human labor with AI. Studies and real-world examples – including Klarna – show augmentation often delivers superior outcomes.
How do I build a business case for an AI project?
Start by identifying the precise process you aim to improve – and quantify its current performance (baseline). Define a realistic, measurable target (e.g., 40% faster resolution, 75% fewer manual interventions). Estimate cost savings, revenue uplift, or risk reduction in monetary terms. Crucially, include implementation costs, timeline, and fallback plans – not just technical feasibility.
What role does data quality play in AI implementation?
Data quality is the foundational requirement. Without clean, structured, and complete data, no AI model can operate reliably. Establishing the data baseline and building data infrastructure is often more demanding than training the model itself.
What can enterprises learn from the Klarna example?
Klarna’s experience shows that treating AI as a cost-cutting lever – rather than a capability enhancer – can backfire. Rapid workforce reduction without parallel investment in human-AI collaboration eroded service quality and customer trust. Sustainable AI adoption requires augmenting people, not replacing them.
How many AI projects should a company run in parallel?
Less is more. Successful companies prioritize vertically: they fully solve one problem before tackling the next use case. Three rigorously executed projects with measurable impact are worth far more than thirty parallel proof-of-concepts.
Header Image Source: Unsplash / Scott Graham
TL;DR
- Between 80 and 95 percent of all corporate AI projects never reach production – despite rising budgets.
- “AI-first” systematically leads companies to buy technology first, then hunt for problems it might solve.
- Successful companies adopt vertical prioritization: fully solving one concrete problem before moving to the next use case.
- Augmentation – not replacement – generates more business value, as illustrated by Klarna’s cautionary tale: its radical AI push led directly to declining service quality.
- A “problem-first” filter – three core questions asked before every project launch – reduces failure rates and boosts the impact of AI investments.
Most documented AI implementations in enterprises fail to deliver promised results. Exact figures vary by source – but whether you read MIT research or industry analysts,
80 to 95 percent of projects stall, are quietly shelved, or never make it to production. At the same time, most boards are increasing their AI budgets for 2026. That’s not a contradiction – it’s a warning signal. Because hope is not a strategy.
The automotive industry offers vivid examples. Volkswagen’s software subsidiary Cariad burned through billions before the group pulled the plug – not because the technology didn’t exist, but because the organization had
no clear, prioritized problem to solve. Instead, everything was supposed to be transformed at once. The result? Plenty of infrastructure, little output. Cariad isn’t an outlier. It’s the pattern.
Why “AI-First” Systematically Fails
The error begins with framing. “AI-first” sounds decisive and innovative. In operational reality, however, it usually means: a company buys AI tools and
then looks for problems they might fit. That reverses proven innovation logic: technology doesn’t solve problems no one has articulated.
The first symptom is a solution without a problem. Departments receive budgets to “do something with AI.” They evaluate tools, build demos, present at internal Innovation Days. But no one asked beforehand:
Which specific process do we want to improve? By how much? And how will we measure it? The result: pilot projects that work technically – but generate
zero demonstrable business value.
The second symptom is horizontal PoC overload. Many companies launch ten, twenty, or more proof-of-concepts simultaneously across departments. Each team works with different data, different vendors, and different success criteria. The outcome? A portfolio of half-finished experiments – none of which clears the scalability hurdle. Not because the technology failed, but because no one did the organizational groundwork – data infrastructure, governance, change management.
The third symptom is the absence of real success metrics. “We’re now using AI” is a press release – not a KPI. As long as companies measure AI project success by whether the project exists at all – rather than by
measurable improvements in cycle time, error rate, or cost reduction – their investments remain acts of faith. And faith doesn’t scale.
What Successful Companies Do Differently
The few companies whose AI projects demonstrably succeed share one trait: they treat AI not as a strategy, but as an
engineering discipline. The distinction sounds subtle – but it’s fundamental.
Vertical prioritization instead of horizontal scattering. Rather than rolling out AI everywhere at once, these companies identify one concrete process with high pain potential and strong data availability. They solve
that single problem end-to-end – from data cleaning and model training to integration into the live workflow. Only then does the next use case begin.
Clear business cases before the first prompt. Before even connecting an API, a quantified business case is already in place – detailing expected benefits in hard numbers. Not vague categories like “efficiency gains,” but concrete targets: cutting processing time from 48 to 12 hours; reducing error rates from 8% to 2%; saving €200,000 per quarter. If you can’t name those numbers, you don’t have an AI project – you have a hypothesis.
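The arithmetic behind such a target is simple enough to write down. A minimal sketch in Python; the case volume and hourly cost below are illustrative assumptions, not figures from any real project:

```python
# Illustrative business-case arithmetic; every input is an assumption.
def quarterly_savings(baseline_hours, target_hours, cases_per_quarter, cost_per_hour):
    """Savings from cutting per-case processing time, in the currency of cost_per_hour."""
    return (baseline_hours - target_hours) * cases_per_quarter * cost_per_hour

# Target from the text: 48h -> 12h per case. Assume 140 cases/quarter at €40/h.
savings = quarterly_savings(48, 12, 140, 40)
print(f"€{savings:,.0f} per quarter")  # 36h saved * 140 cases * €40 = €201,600
```

If you cannot fill in these four numbers for your own process, the business case isn’t ready yet.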
Augmentation instead of replacement. This brings us full circle to the Klarna debate, which surged again in mid-2025. Klarna had adopted “AI-first” as its corporate motto – and cut staff accordingly. The result: deteriorating service quality, mounting customer frustration, and its CEO publicly admitting the math didn’t add up. The counter-model? Companies deploying AI to
augment human work. A claims specialist handling twice as many cases with AI support delivers more value than a bot incorrectly resolving half of them.
The Problem-First Framework: Three Questions Before Every AI Project
To lower your AI initiative failure rate, you don’t need a new tool – or another consultant. You need
discipline around three questions, each answered honestly before any project kicks off.
Question one: Which concrete, measurable problem does this project solve? If the answer is “We want to use AI,” that’s not a problem – it’s a means. Back to square one.
Question two: What is the current baseline value – and what’s the target? Without a baseline, there’s no way to measure progress. If you don’t know how long a process takes today, you can’t judge whether AI speeds it up. Establishing that baseline is often harder than building the model itself – which is precisely why it gets skipped.
A fatal mistake.
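Establishing a baseline doesn’t require heavy tooling. A hedged sketch, assuming you have start/end timestamps for each case (the log entries below are invented for illustration):

```python
from datetime import datetime
from statistics import median

def baseline_cycle_hours(events: list[tuple[str, str]]) -> float:
    """Median hours from case start to case close; input is (start, end) ISO timestamps."""
    durations = [
        (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600
        for start, end in events
    ]
    return median(durations)

# Hypothetical process log: three cases, two taking 48h, one taking 24h.
log = [("2025-03-01T09:00", "2025-03-03T09:00"),
       ("2025-03-02T10:00", "2025-03-04T10:00"),
       ("2025-03-05T08:00", "2025-03-06T08:00")]
print(baseline_cycle_hours(log))  # 48.0
```

The median is a deliberate choice here: a few pathological outlier cases shouldn’t define the baseline an AI project is measured against.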
Question three: What happens if the AI gets it wrong? Every AI system has an error rate. The question isn’t
whether errors occur – but whether there’s a fallback plan. In insurance damage assessment, misclassifying a claim may be an annoyance. In medical diagnostics, it’s a catastrophe.
These three questions aren’t workshop innovation frameworks. They’re filters. Answer them honestly, and you’ll likely find that only
three of ten planned AI projects pass muster. That’s the point. Three well-conceived, production-ready projects beat thirty pilots that die in slide decks.
Checklist: Ready for AI Projects – or Just AI Announcements?
Five questions for an honest two-minute reality check:
- Can you name a concrete, measurable target value for every active AI project?
- Does every project have a designated process owner accountable for success – not just for the tech?
- Do you have a documented data baseline for the processes AI is meant to improve?
- Is there a fallback strategy if the AI system fails – or delivers erroneous outputs?
- Are your AI initiatives measured against business metrics – or against the number of projects launched?
If you can answer “yes” to at least four of these, you’re already operating problem-first. Everyone else should stop expanding their AI roadmap – and start sharpening it.
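The two-minute check reduces to counting yes-answers. A minimal sketch; the threshold of four comes from the text, while the key names and verdict strings are illustrative:

```python
def readiness(answers: dict[str, bool], threshold: int = 4) -> str:
    """Score the five checklist answers; pass at `threshold` or more yes-answers."""
    yes = sum(answers.values())
    return "problem-first" if yes >= threshold else "sharpen the roadmap"

# Hypothetical self-assessment: four of five answered honestly with yes.
answers = {
    "measurable_target_per_project": True,
    "accountable_process_owner": True,
    "documented_data_baseline": False,
    "fallback_strategy": True,
    "measured_on_business_metrics": True,
}
print(readiness(answers))  # prints "problem-first"
```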
Fewer projects, more impact. Less vision, more engineering. Less “AI-first,” more “Problem-first, AI-enabled.” This isn’t a brake on innovation. It’s the prerequisite for innovation that actually lands.