Eva Mickler
Edge computing in the factory is not a matter of faith in cloud versus plant, but a question of which decision matches which latency, which data volume and which responsibility. That line decides whether a plant architecture will still hold up in five years or quietly turn into a retrofit project.
In many plants today (as of April 2026), a mix of legacy PLC landscapes, a central MES and a corporate-mandated cloud platform is in operation. The debate often held in recent years, cloud or edge, was rarely helpful. At the same time, IT/OT convergence has established itself as an operational topic in its own right, sitting at the table in every edge discussion. Those who listened too long went home either with a cloud-first mandate for every line or with the idea that each controller needs its own compute node. Both are wrong in factory reality.
A more useful question stays closer to the shop floor: Which decision must be made where so that the line keeps running when the network flickers and the corporate cloud hub in Frankfurt goes silent for a minute? And which decision may be made centrally because it operates on a slower, more expensive, larger scale, such as planning, cross-plant quality analysis or the release of a new control parameter?
In industry, edge computing is not “a small server in the control cabinet,” but a layer that sits between automation technology and the IT platform. It ingests sensor data, control signals and video streams, processes part of it locally—e.g., for closed‑loop control, anomaly detection or visual quality inspection—and forwards only what is truly needed centrally.
Technically it’s a mix of industrial PCs or edge gateways, a container runtime, connectivity to OPC UA or field‑bus protocols, a link to corporate IT, and a plan for getting software updates all the way into the shop floor. Cloud providers and automation vendors each offer their own platforms; the spectrum spans AWS Outposts, Azure Local or Google Distributed Cloud Edge to industry suites from Siemens, Bosch Rexroth or Beckhoff. Which platform fits isn’t a matter of taste but depends on the existing automation landscape. Parts of this question overlap with the data‑sovereignty discussion presented in the Edge Computing Outlook 2026.
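The split described above, process locally and forward only what is truly needed, can be sketched in a few lines. This is a minimal illustration, not a vendor API; the temperature field, the anomaly threshold and the aggregate shape are all illustrative assumptions.

```python
from statistics import mean

# Hypothetical limit, e.g. a spindle temperature in °C; real thresholds
# come from the automation engineers, not from IT.
ANOMALY_LIMIT = 80.0

def process_locally(readings: list[float]) -> dict:
    """Edge side: evaluate raw readings on the line, keep them local."""
    anomalies = [r for r in readings if r > ANOMALY_LIMIT]
    return {"count": len(readings),
            "mean": round(mean(readings), 2),
            "anomalies": len(anomalies)}

def forward_to_cloud(aggregate: dict) -> dict:
    """Only the compact aggregate leaves the plant; raw streams do not."""
    return {"payload": aggregate, "bytes_sent": len(str(aggregate))}

readings = [71.2, 73.5, 84.1, 72.0]   # raw sensor window, stays on the edge
aggregate = process_locally(readings)  # local anomaly check, local decision
message = forward_to_cloud(aggregate)  # central side sees the summary only
```

The point of the sketch is the asymmetry: four raw readings are reduced to one small record before anything crosses the plant boundary.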
The honest take on the trend: in most plants edge computing isn’t a shiny new architecture but the consequence of control computers, vision systems and central platforms no longer fitting into the same pot. The interesting part isn’t the deployment itself but the question of which decisions you entrust to which part of the system.
Source: IoT Analytics, Industrial IoT Market Overview, 2024
In practice, you encounter essentially three patterns in the industrial environment. None is inherently right or wrong; they simply have very different implications for operations, investment and risk.
Architecture patterns rarely fail because of the technology. They stumble where no one has defined which decision each system is allowed to make when the network is down for five minutes or when the mechanical engineering department wants a patch at three on a Sunday.
1. Cloud‑central with thin edge. Control largely stays in the existing PLCs. The edge layer acts as a collector, sending encrypted data to the cloud where analysis, MES logic and visualization occur. This works well where real‑time requirements are moderate and network availability is high, such as assembly lines with clearly defined cycle times.
2. Edge‑autonomous with a central cloud backend. The line keeps running fully even without a cloud connection; models for quality inspection or predictive maintenance run locally, while the cloud receives aggregates and handles training and redistribution. This pattern is typical for high‑data‑rate processes with low tolerance for downtime, such as visual inspection or safety‑critical functions.
3. Hybrid with a clear responsibility boundary. Edge takes over control and time‑critical analysis, while the cloud handles cross‑plant planning, plant benchmarking and AI training. It sounds like a compromise, but in practice it’s the most demanding pattern—because the boundary between the two worlds must be cleanly documented, tested and operated.
The common weak spot of all three patterns isn’t the technology but the assumption that you can decide after the fact who runs which layer. If production, IT and an external service provider share responsibility for the same edge cluster without a clear answer to who patches, restarts or escalates in an emergency, the architecture collapses long before the first hardware failure.
Because investment proposals often pit these two extremes against each other, a sober side‑by‑side comparison is worthwhile. The truth rarely lies at either extreme, yet the trade‑offs are seldom voiced in committees.
A concrete example from an anonymized project in the supplier industry illustrates this: a line with visual quality inspection was initially planned as pure cloud because the corporation already mandated a platform. After the first pilot run it became clear that the latency for the ejection logic could not reliably stay below 80 milliseconds. The final architecture opted for edge autonomy on the line, with aggregate data in the central platform. The technology was secondary; the decision was about responsibility: Who runs the line when the network is down? The answer had to be decided before the architecture.
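The 80‑millisecond finding in that example can be reproduced as a back‑of‑the‑envelope budget check. All latency figures below are illustrative assumptions, not measurements from the project; the structure of the calculation is the point.

```python
BUDGET_MS = 80.0  # ejection logic must fire within this window (per the example)

def fits_budget(capture_ms: float, inference_ms: float,
                network_rtt_ms: float, actuation_ms: float) -> bool:
    """True if the full capture-to-ejection path stays inside the budget."""
    total = capture_ms + inference_ms + network_rtt_ms + actuation_ms
    return total <= BUDGET_MS

# Edge-local inference: round trip to an on-line node is a few milliseconds.
edge_ok = fits_budget(capture_ms=15, inference_ms=30,
                      network_rtt_ms=2, actuation_ms=10)    # 57 ms total

# Cloud inference: even a good WAN path adds tens of milliseconds plus jitter.
cloud_ok = fits_budget(capture_ms=15, inference_ms=30,
                       network_rtt_ms=45, actuation_ms=10)  # 100 ms total
```

With these assumed numbers the edge path fits and the cloud path does not, which is exactly why the pilot forced the architecture decision rather than the other way around.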
A second pattern from practice: in a multi‑plant consortium the hybrid model was tendered because it appeared the most flexible. After twelve months of operation it turned out that the boundary between local decision and central approval had been drawn at three different points – depending on the plant and the team that delivered first. The most expensive item was not the hardware but the six months a cross‑plant governance group needed to retroactively standardize the responsibility interface. Had it been defined before the tender, the same decisions would have been made in weeks rather than quarters.
In industry, edge computing is usually discussed in committees as a technology topic. That’s the first structural mistake. Anyone who talks only about platforms, licenses and vendors overlooks three factors that later become just as costly as the hardware.
This is all the more true when a plant project runs in parallel with portfolio consolidation at the corporate level, a topic many CIOs are currently tackling under the banner of vendor consolidation. Reducing at the corporate level while silently allowing new platforms into the plant merely reshuffles the complexity.
The lifecycle costs of plant hardware. Edge nodes are not servers in a climate‑controlled data centre; they sit on the shop floor, often for years. Cooling, dust, vibration, maintenance windows, spare‑part availability after seven years – all of this is missing from most business cases that assume a three‑year horizon. An IT calculation based on 36 months does not match a plant lifecycle of ten to fifteen years.
Especially for plant hardware, it pays to look at a second equation that many 2027 IT budgets already reveal: a growing share of IT spend goes to operating existing systems rather than building new ones. Edge in the plant amplifies this effect when the acquisition is not tied to a clear operating concept.
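The mismatch between a 36‑month IT calculation and a ten‑year plant lifecycle is easy to make concrete. The euro figures and refresh cycle below are illustrative assumptions; the shape of the gap is what matters for the business case.

```python
def tco(acquisition: float, yearly_opex: float, years: int,
        refresh_every: int = 0) -> float:
    """Total cost of ownership over `years`, with optional hardware refreshes.

    refresh_every=0 means no refresh is in scope (the classic IT view).
    """
    refreshes = (years - 1) // refresh_every if refresh_every else 0
    return acquisition * (1 + refreshes) + yearly_opex * years

# Classic IT view: 3-year horizon, no refresh, lean opex.
it_view = tco(acquisition=50_000, yearly_opex=12_000, years=3)

# Plant view: 10 years, one refresh after year 5, opex including spare-part
# provisioning and OT-capable support on the shop floor.
plant_view = tco(acquisition=50_000, yearly_opex=18_000, years=10,
                 refresh_every=5)
```

Under these assumptions the three‑year view shows 86,000 while the lifecycle view shows 280,000, a gap that never appears in a business case cut off at month 36.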
The OT security reality. As soon as edge systems are docked to controllers, an isolated automation cell becomes a connected system with all the known risks. NIS2 now makes this visible at the EU regulatory level; the questions of IT‑OT separation, patch cycles and incident response on the shop floor do not become easier, they become billable. Anyone who does not embed this in the investment proposal submits an incomplete business case.
How such organisational questions around new technology layers get sorted out has become a governance discipline of its own – see the discussion of the Chief AI Officer and its mandate. The same applies to edge in industry: technology without a clear mandate remains a project and never becomes an operation.
The question of who operates it. Many plants today have neither an IT team with OT experience nor an automation team with container know‑how. The gap in between is often written off as “we’ll handle it later via a service provider”. That may hold true in quiet years. In an incident scenario at 03:40 am, before the first coffee of the shift, it is the most expensive line an organization can sign.
These five steps won’t replace an architecture discussion, but they ensure the conversation happens in the right place—before a vendor takes control of the meeting.
Step 1: Decision-making before technology. Define per production line which decisions must be made locally (cycle times, control, safety functions) and which can remain centralised (planning, analytics, benchmarking). Only then does a platform discussion make sense.
Step 2: Simulate network and outage scenarios. How long can the line keep running without WAN? What happens if the central platform doesn’t respond for 60 minutes? Who decides when to stop or continue operating a line? No investment without answers.
Step 3: Calculate lifecycle and TCO over ten years. Edge hardware follows the asset lifecycle, not the IT lifecycle. Factor in spare parts availability, migration paths, and platform longevity—including what happens if you switch providers.
Step 4: Document operational responsibility in writing. Who patches? Who reboots? Who escalates to automation engineering, IT, and service providers? Without clearly assigned roles per plant, even the best edge stack becomes a loose end in the next audit.
Step 5: Align security assumptions with NIS2. Segmentation, logging, incident response at the plant, supplier dependencies—edge architectures that don’t address these cleanly become a problem at the first audit, not just during a real incident.
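The outage question in step 2 has a concrete technical core: store‑and‑forward. The edge keeps producing telemetry during a WAN outage, buffers it locally, and drains the buffer once connectivity returns; the buffer capacity is what defines how long an outage is survivable without data loss. The capacity and message shapes below are illustrative assumptions.

```python
from collections import deque

class EdgeBuffer:
    """Minimal store-and-forward sketch for a WAN outage on the line."""

    def __init__(self, capacity: int):
        self.queue = deque(maxlen=capacity)  # oldest entries drop when full
        self.dropped = 0

    def record(self, message: str) -> None:
        """Buffer a telemetry message; count losses once capacity is hit."""
        if len(self.queue) == self.queue.maxlen:
            self.dropped += 1  # this counter is the outage-duration budget
        self.queue.append(message)

    def drain(self) -> list[str]:
        """Flush buffered messages once the WAN connection returns."""
        out = list(self.queue)
        self.queue.clear()
        return out

buf = EdgeBuffer(capacity=3)
for i in range(5):           # five messages arrive while the WAN is down
    buf.record(f"m{i}")
sent = buf.drain()           # WAN returns: only the newest three survive
```

Running the simulation in step 2 means choosing that capacity deliberately: if losing the two oldest messages is unacceptable, the buffer, and therefore the local storage, must be sized for the longest outage you are willing to tolerate.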
Edge computing in industry isn’t a new architectural mantra—it’s the result of manufacturing, IT, and cloud platforms no longer fitting into the same boxes. The honest C-level question isn’t “how much edge?” but: Which system gets to make which decision when conditions get tough? Draw this line clearly, and edge becomes a justifiable investment, not a matter of faith. Avoid it, and you’ll pay the price later—quietly, but inevitably.
Why does edge computing belong in the factory at all? Production lines demand real-time performance and availability that a cloud round-trip can't reliably deliver in most factory networks. Control systems, safety functions, or visual inspections must keep running even when the WAN connection drops for minutes. Edge computing handles precisely this part, while the cloud remains responsible for planning and overarching analytics.
What is the difference between edge and fog computing? The terms overlap significantly. In practice, edge has become the umbrella term for all computing power located closer to machines and sensors than a traditional data center; fog refers more to an interconnected intermediate layer. For investment decisions the classification is secondary; what matters is which function runs where.
What does NIS2 require of edge systems in the plant? Under NIS2, operators of critical infrastructure and their key suppliers must meet minimum standards for risk management, incident response, and supply chain security. Edge systems that interact with control environments typically fall within scope. In concrete terms, segmentation, patch management, logging, and auditability are no longer optional; they are mandatory compliance criteria.
Can edge be added on top of existing PLCs? In most cases, yes. The edge layer connects via OPC UA, fieldbus, or dedicated gateways without altering PLC logic. This lowers the entry barrier but requires clean alignment of data models, naming conventions, and permissions between automation and IT, often the real challenge in brownfield environments.
When does a container-based edge platform pay off? Once multiple applications need to run in parallel on the same hardware, updates must be deployed remotely, or models require regular redeployment, a container- and platform-based edge setup outperforms a monolithic industrial PC. The effort pays off as soon as the number of applications per line exceeds manual manageability.