Shadow AI 2026: Three Measures That Give CIOs Visibility Before the EU AI Act Deadline
Tobias Massow
8 Min. Read
Seven out of ten employees in German companies use AI tools that their IT department has never approved. Company secrets end up in external training sets, liability issues remain open, and from August 2026, the EU AI Act will turn a governance oversight into a compliance risk with penalties of up to 35 million Euro. Three measures immediately create visibility – without a major project, without additional budget.
What is Shadow AI? Shadow AI refers to the use of AI tools by employees without the knowledge or approval of the IT department – analogous to Shadow IT, but with significantly higher data-protection and liability risks, because many systems may use entered content for model training.
The Logicalis CIO Report 2024 delivers a sobering figure: 62% of surveyed CIOs admit they are already making compromises in AI governance. Not because they don’t see the problem, but because daily operations move faster than any governance initiative. Meanwhile, employees have made their own decision: they are not waiting.
ChatGPT, Claude, Gemini, Perplexity, Copilot variants from the App Store – the list of unapproved tools running daily in companies grows faster than any inventory tool can track. Current market data shows that 78% of knowledge workers use at least one AI tool their IT department has never seen.
The real danger does not lie in the tool itself. It lies in what employees input: contract excerpts, customer data, internal analyses, strategy documents. If an employee uploads a contract draft to a consumer AI tool to have it summarized, this content may end up in the training set of the next model or on servers outside the EU.
Why August 2026 is a Hard Deadline
The EU AI Act introduces binding obligations starting August 2026. AI systems falling into high-risk areas – HR decisions, credit scoring, critical infrastructure – are subject to stringent documentation and audit requirements. Anyone without an inventory of their AI usage by then will be unable to demonstrate which systems fall into which risk category.
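The classification the Act demands can be sketched as a simple lookup. A minimal illustration – the category names and use-case labels are simplified assumptions, and real classification requires legal review against the Act’s annexes:

```python
# Simplified sketch of the EU AI Act risk tiers mentioned in the text.
# Use-case labels are illustrative; the Act's annexes are far more detailed.
HIGH_RISK_AREAS = {"hr_decisions", "credit_scoring", "critical_infrastructure"}
PROHIBITED_PRACTICES = {"social_scoring"}

def risk_category(use_case: str) -> str:
    """Map a use case to a (simplified) AI Act risk category."""
    if use_case in PROHIBITED_PRACTICES:
        return "prohibited"        # banned since February 2025
    if use_case in HIGH_RISK_AREAS:
        return "high-risk"         # documentation/audit duties from August 2026
    return "limited-or-minimal"    # lighter transparency duties

print(risk_category("credit_scoring"))  # high-risk
```

An inventory without such a mapping cannot answer the one question an auditor will ask first: which of your systems are high-risk?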
The penalties are not theoretical: up to 35 million Euro or 7% of global annual turnover, whichever is higher. For a company with 500 million Euro in revenue, the two thresholds coincide: 7% of turnover is exactly 35 million Euro. Supervisory authorities will initially focus on companies that have attracted attention through incidents. Those with a traceable inventory and a documented policy will not be the most obvious target.
At the same time, the Act penalizes not AI usage itself but uncontrolled AI usage. This is the central lever for a pragmatic governance strategy: visibility precedes regulation; it does not follow it.
AI Inventory (Week 1-2)
A structured survey of 20 to 30 employees from various departments typically captures 80% of the tools in use. No discovery tool necessary. Anonymity increases honesty and reveals which tools genuinely provide added value. Result: a clear list with tool name, provider, data type, and estimated usage frequency. This forms the basis for every subsequent step.
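The survey result can live in a plain spreadsheet; for teams that prefer something scriptable, a minimal sketch of the record structure (field names and example entries are my assumption, not prescribed above):

```python
import csv
import io
from dataclasses import dataclass, asdict

@dataclass
class AIToolRecord:
    """One inventory row: tool name, provider, data type, estimated usage."""
    tool: str
    provider: str
    data_type: str        # e.g. "contract excerpts", "customer data"
    uses_per_week: int    # estimated usage frequency from the survey

# Illustrative entries as they might emerge from a 20-30 person survey
inventory = [
    AIToolRecord("ChatGPT", "OpenAI", "contract excerpts", 12),
    AIToolRecord("Claude", "Anthropic", "internal analyses", 5),
]

# Export as CSV so IT, Legal, and business work from the same list
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["tool", "provider", "data_type", "uses_per_week"])
writer.writeheader()
for record in inventory:
    writer.writerow(asdict(record))
print(buf.getvalue())
```

Four columns are enough; anything more elaborate delays the inventory without improving the risk picture.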
Light Policy Model (Week 3-4)
No 40-page policy. A single A4 page with three categories: green (approved without restrictions), yellow (usable with data protection requirements), and red (not usable with company data). This ‘traffic light’ system can be developed in two weeks by IT, Legal, and a business representative. Employees receive clear guidance – without prohibitions that only encourage covert behavior.
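A side effect of keeping the policy to three categories: it can double as a machine-readable allowlist. A minimal sketch, with hypothetical tool assignments and a deliberately conservative default:

```python
from enum import Enum

class Category(Enum):
    GREEN = "approved without restrictions"
    YELLOW = "usable with data protection requirements"
    RED = "not usable with company data"

# Illustrative mapping - the real assignments come from IT, Legal,
# and a business representative, as described above.
POLICY = {
    "ApprovedInternalAssistant": Category.GREEN,
    "ChatGPT": Category.YELLOW,
    "Claude": Category.YELLOW,
    "RandomAppStoreCopilot": Category.RED,
}

def check(tool: str) -> Category:
    """Tools not yet on the list default to RED until reviewed."""
    return POLICY.get(tool, Category.RED)

print(check("ChatGPT").value)   # usable with data protection requirements
print(check("NewTool").value)   # not usable with company data
```

The RED default matters: an unknown tool triggers a review instead of silently passing, which is exactly the visibility the inventory is meant to create.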
Companies that rely on complete prohibitions report the same thing: The use of unapproved tools decreases on paper in the short term and increases on smartphones in the medium term. Employees switch to private devices and personal accounts. The governance problem becomes more invisible, not smaller.
[Comparison graphic: Enablement Approach vs. Prohibition Approach]
The most common argument against AI governance in medium-sized companies: no budget, no headcount. Both objections are understandable, yet neither blocks the three measures described. The inventory costs three person-days. The traffic light policy costs four to six person-days to develop, then one hour per quarter to maintain.
This is not a project – it is a decision the CIO can initiate at the next leadership meeting. What will be far more expensive is meeting the documentation obligations from August 2026 without preparation. The three measures are no substitute for a complete AI strategy; they are the foundation on which every complete strategy must be built. Visibility comes first – only then is control possible.
Most provisions of the EU AI Act become binding from August 2026. High-risk AI systems will then be subject to strict documentation and audit requirements. Prohibited practices such as social scoring were already banned from February 2025. Companies therefore still have a limited lead time to build up inventory and policy.