18.02.2026

4 min Reading Time

More and more companies are moving workloads back from the public cloud into their own data centers or private-cloud environments. This trend – known as cloud repatriation – runs counter to the long-standing “cloud-first” mantra. For CIOs, however, this isn’t a step backward; it’s a sign of strategic maturity. Organizations that understand precisely where each workload belongs optimize costs, compliance, and control simultaneously.

TL;DR

  • 📊 Confirmed countertrend: According to IDC, 71 percent of enterprises have already repatriated workloads from the public cloud. The trend is accelerating in 2026 due to cost pressure and tightening regulation.
  • 💰 Cost savings up to 50 percent: Companies like 37signals report savings of up to $7 million over five years by shifting from AWS to owned hardware.
  • 🔒 Regulation drives repatriation: NIS2, DORA, and the EU AI Act make data sovereignty mandatory. Certain workloads may not be hosted in U.S.-controlled clouds.
  • ⚠️ No one-size-fits-all answer: Repatriation isn’t suitable for every workload. AI training, global scaling, and burst capacity remain core cloud strengths.
  • 🎯 Workload placement matrix: CIOs need a data-driven decision framework: Which workload belongs where – and why?

Why Companies Are Leaving the Cloud

“Cloud-first” defined the past decade. Enterprises migrated workloads en masse to the public cloud, lured by promises of flexibility, scalability, and freedom from hardware management. Yet after years of operation, downsides have emerged: runaway costs, vendor lock-in that impedes migration, and compliance requirements increasingly at odds with U.S.-controlled hyperscalers.

IDC found in a recent study that 71 percent of enterprises have already repatriated workloads from the public cloud. Motivations vary: cost optimization ranks first, followed by data sovereignty and performance demands. In the DACH region (Germany, Austria, Switzerland), regulatory developments play an ever-larger role: NIS2, DORA, and the EU AI Act impose strict data control requirements that certain cloud configurations simply cannot meet.

The most prominent example is 37signals – the company behind Basecamp and HEY. Founder David Heinemeier Hansson documented its move from AWS to owned hardware, projecting $7 million in savings over five years. While this case isn’t directly transferable to German mid-sized businesses, it illustrates a fundamental principle: beyond a certain scale and workload predictability, owning infrastructure becomes more economical than renting cloud capacity.

“In most cases, the decision to repatriate workloads from the public cloud is a cost decision. Enterprises with predictable workloads can save 30 to 50 percent by running them on owned infrastructure.”
IDG/Supermicro Cloud Survey (2024)

The Three Drivers of Repatriation

Driver 1: The cost truth after years of cloud usage. Many organizations only calculated their true total cost of ownership (TCO) after several years of cloud operations. Egress fees, storage charges, premium support contracts, and the labor cost of cloud architects all add up. Flexera estimates average cloud waste at 29 percent. For predictable, steady-state workloads, owned infrastructure is often 30-50 percent cheaper than an equivalent cloud configuration. The break-even point depends on scale – but typically falls between 100 and 200 virtual machines or their container equivalent.
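The break-even logic above reduces to a cumulative-cost comparison: a monthly cloud bill against upfront hardware capex plus ongoing operations. A minimal sketch follows; the euro figures are hypothetical assumptions chosen for illustration, not data from the article.

```python
# Illustrative TCO break-even sketch: cumulative public-cloud spend vs.
# owned infrastructure (one-time capex plus monthly opex).
# All figures are hypothetical assumptions, not reported numbers.

def breakeven_month(cloud_monthly, capex, opex_monthly, horizon_months=60):
    """Return the first month in which cumulative owned-infrastructure
    cost drops below cumulative cloud cost, or None if it never does
    within the horizon (default: five years)."""
    for month in range(1, horizon_months + 1):
        cloud_total = cloud_monthly * month
        owned_total = capex + opex_monthly * month
        if owned_total < cloud_total:
            return month
    return None

# Example: a 40,000 EUR/month cloud bill vs. 600,000 EUR hardware
# plus 15,000 EUR/month operations.
print(breakeven_month(40_000, 600_000, 15_000))  # → 25 (break-even in month 25)
```

The shape of the curve, not the exact numbers, is the point: the higher the predictable baseline load, the earlier owned infrastructure undercuts the cloud bill, which is why the break-even cluster around 100 to 200 VMs appears in practice.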

Driver 2: Regulatory mandates for data sovereignty. The EU Data Act, NIS2, and the EU AI Act tighten requirements around data control. Meanwhile, the U.S. CLOUD Act permits U.S. authorities to access data held by U.S. companies – even if the servers reside physically in the EU. An AWS or Microsoft data center located in the EU therefore does not automatically shield data from such access. For sensitive data in regulated industries, the choice narrows to two options: sovereign cloud or owned infrastructure. There is no third.

Driver 3: Performance and latency. Certain workloads benefit significantly from physical proximity to data sources. Industrial IoT applications, real-time manufacturing analytics, and specific AI inference tasks demand sub-10-millisecond latencies – something cloud architectures cannot always guarantee. Edge computing and on-premises infrastructure deliver advantages here that centralized cloud regions typically cannot match.

The Workload Placement Matrix

Cloud repatriation doesn’t mean bringing all workloads back. CIOs need a nuanced framework to evaluate each workload individually. The decision hinges on four factors: cost, compliance, performance, and scalability.

71% – have repatriated workloads
32% – average cloud waste
$7 million – savings for 37signals over 5 years

Sources: IDC 2025, Flexera State of Cloud 2026, 37signals Blog

Cloud remains the right choice for: GPU-burst-intensive AI training, globally distributed applications with variable traffic, disaster recovery and backup, development and test environments, and SaaS applications with mature ecosystems. Here, scalability and flexibility outweigh the cost premium.

Repatriation pays off for: stable production workloads with predictable load, I/O-intensive databases, data subject to regulatory constraints in the DACH region, legacy applications that would require costly cloud-native rewrites, and HPC workloads with sustained GPU utilization above 70 percent. These benefit from lower unit costs and full data control.
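The four-factor evaluation (cost, compliance, performance, scalability) can be turned into a rough placement score. The following is a toy sketch only: the factor names, weights, thresholds, and example workloads are assumptions for illustration, not a prescribed methodology.

```python
# Toy workload placement score along the article's four factors.
# Each factor is rated 0-10. High cost predictability, data sensitivity,
# and latency sensitivity push toward repatriation; high scaling
# variability pushes toward the public cloud. Weights and thresholds
# are illustrative assumptions.

def placement(workload):
    repatriate_score = (workload["cost_predictability"]
                        + workload["data_sensitivity"]
                        + workload["latency_sensitivity"])
    cloud_score = 3 * workload["scaling_variability"]
    if repatriate_score > cloud_score + 5:
        return "repatriate"
    if cloud_score > repatriate_score + 5:
        return "cloud"
    return "hybrid / case-by-case"

# Hypothetical examples: a steady ERP system vs. bursty AI training.
erp = {"cost_predictability": 9, "data_sensitivity": 8,
       "latency_sensitivity": 6, "scaling_variability": 2}
ai_training = {"cost_predictability": 2, "data_sensitivity": 4,
               "latency_sensitivity": 2, "scaling_variability": 9}

print(placement(erp))          # → repatriate
print(placement(ai_training))  # → cloud
```

A real assessment would of course rest on measured TCO and compliance classifications per workload rather than gut-feel scores, but the structure – score each factor, compare against thresholds, default to hybrid when ambiguous – mirrors how a placement matrix is used in practice.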

The hybrid approach as the optimal path: Most enterprises will settle on a hybrid model – regulation-critical and cost-intensive workloads on owned or sovereign infrastructure, variable and global workloads in the public cloud. The challenge lies not in making the decision but in executing it: network interconnectivity, data synchronization, and unified management across both environments demand investment in platform engineering and multi-cloud expertise.

How CIOs Should Present Repatriation to the Board

The biggest hurdle is communication. Saying “We’re leaving the cloud” sounds like regression. CIOs must frame the message as strategic maturity: “We’re optimizing our IT architecture based on four years of cloud experience. Workloads that cost more in the cloud than on owned infrastructure – and that don’t leverage any cloud-specific advantages – we’re bringing back. That saves X euros annually, while maintaining or even improving compliance.”

The business case must be concrete: TCO comparisons per workload over three to five years, compliance gains through data sovereignty, performance improvements for latency-sensitive applications, and risk reduction via decreased vendor lock-in. With these numbers, cloud repatriation becomes the logical outcome of data-driven decision-making – not an admission of strategic failure.

Frequently Asked Questions

What is cloud repatriation?

Cloud repatriation refers to moving workloads from the public cloud back into an organization’s own data centers, private-cloud environments, or colocation facilities. The trend is driven by cost optimization, regulatory requirements, and performance considerations.

Which workloads benefit most from repatriation?

Stable production workloads with predictable load, I/O-intensive databases, regulatorily sensitive data, and legacy applications see the greatest benefit. The typical break-even point lies between 100 and 200 virtual machines – or their container equivalent.

How large are the potential savings?

Depending on workload profiles, enterprises report 30-50 percent cost reductions for predictable workloads. The most widely cited example is 37signals, projecting $7 million in savings over five years. Actual savings depend on the specific workload structure.

Does repatriation contradict a cloud strategy?

No. Repatriation isn’t an anti-cloud statement – it signals strategic maturity. Most enterprises adopt a hybrid model: regulation-critical and cost-intensive workloads run on owned infrastructure, while variable and global workloads stay in the public cloud.

What risks does cloud repatriation entail?

Key risks include the upfront capital investment for owned infrastructure, the need for specialized operational talent, and the complexity of managing a hybrid architecture. CIOs should begin with a TCO analysis spanning at least three years before committing.

Header Image Source: Brett Sayles / Pexels

A magazine by Evernine Media GmbH