03.05.2026

8 Min. Read

Almost eight out of ten employees in German companies use AI tools that their IT department has never approved. Company secrets end up in external training sets, liability questions remain open, and from August 2026 the EU AI Act will turn this governance oversight into a compliance risk with penalties of up to 35 million Euro. Three measures create visibility immediately – without a major project and without additional budget.

Key Takeaways

  • 78% of employees use unapproved AI tools – only 14% of companies have clearly defined governance responsibilities
  • EU AI Act binding from August 2026: penalties of up to 35 million EUR or 7% of global annual turnover
  • Three immediate measures: AI inventory, policy-light model, quarterly review rhythm
  • Enablement beats prohibitions: Involving employees creates visibility instead of hidden behavior

What is Shadow AI? Shadow AI refers to the use of AI tools by employees without the knowledge or approval of the IT department – analogous to Shadow IT, but with significantly higher data and liability risks, because many systems can process entered content for training.

The Problem Has Long Reached a Significant Scale

The Logicalis CIO Report 2024 delivers a sobering figure: 62% of surveyed CIOs admit to already making compromises in AI governance. Not because they don’t see the problem, but because daily operations move faster than any governance initiative. Meanwhile, employees have made their own decision: They are not waiting.

ChatGPT, Claude, Gemini, Perplexity, Copilot variants from the App Store – the list of unapproved tools running daily in companies is growing faster than any inventory tool can track. Current market data shows: 78% of knowledge workers use at least one AI tool that their IT department has never seen.

The real danger does not lie in the tool itself. It lies in what employees input: contract excerpts, customer data, internal analyses, strategy documents. If an employee uploads a contract draft to a consumer AI tool to have it summarized, this content may end up in the training set of the next model or on servers outside the EU.


Why August 2026 is a Hard Deadline

The EU AI Act introduces binding obligations starting August 2026. AI systems falling into high-risk areas – HR decisions, credit scoring, critical infrastructure – are subject to stringent documentation and audit requirements. Anyone without an inventory of their AI usage by then will be unable to demonstrate which systems fall into which risk category.

The penalties are not theoretical: up to 35 million Euro or 7% of global annual turnover, whichever is higher. For a company with 500 million Euro in revenue, the 7% cap works out to exactly that 35 million Euro. Supervisory authorities will initially focus on companies that have attracted attention due to incidents. Those with a traceable inventory and a documented policy will not be the most obvious target.
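The fine corridor described above – the higher of a fixed cap or a turnover share – can be illustrated with a small sketch. The function name and the integer-EUR convention are illustrative assumptions, not part of the Act:

```python
# Illustrative sketch of the AI Act's upper fine bound for the most
# serious violations: the HIGHER of a fixed amount or 7% of turnover.
FIXED_CAP_EUR = 35_000_000  # 35 million EUR

def max_fine_eur(annual_turnover_eur: int) -> int:
    """Upper bound of the fine corridor, in whole EUR.

    7% is computed with integer arithmetic (turnover * 7 // 100)
    to avoid float rounding in the comparison.
    """
    return max(FIXED_CAP_EUR, annual_turnover_eur * 7 // 100)

# At 500 million EUR turnover, 7% equals the fixed cap exactly:
print(max_fine_eur(500_000_000))    # → 35000000
# Larger companies hit the turnover-based cap instead:
print(max_fine_eur(2_000_000_000))  # → 140000000
```

The 500-million example in the text is the break-even point: below it the fixed 35-million cap dominates, above it the 7% share does.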

At the same time, the Act penalizes not AI usage itself, but uncontrolled AI usage. This is the central lever for a pragmatic governance strategy: visibility precedes regulation; it doesn't follow it.

Three Immediate Actions

1. AI Inventory (Weeks 1-2)

A structured survey of 20 to 30 employees from various departments typically captures 80% of the tools in use. No discovery tool necessary. Anonymity increases honesty and reveals which tools genuinely provide added value. Result: a clear list with tool name, provider, data type, and estimated usage frequency. This forms the basis for every subsequent step.
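The survey results described above can be aggregated into exactly that list – tool name, provider, data type, estimated usage frequency. A minimal sketch, with assumed field names (mention counts stand in for usage frequency):

```python
# Minimal sketch: turning anonymous survey answers into an AI inventory.
# Field names (tool, vendor, data_type) are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class SurveyAnswer:
    tool: str       # e.g. "ChatGPT"
    vendor: str     # e.g. "OpenAI"
    data_type: str  # e.g. "customer data", "internal analyses"

def build_inventory(answers: list[SurveyAnswer]) -> list[dict]:
    """Deduplicate tools; use mention counts as a usage-frequency estimate."""
    mentions = Counter(a.tool for a in answers)
    first_seen: dict[str, SurveyAnswer] = {}
    for a in answers:
        first_seen.setdefault(a.tool, a)  # keep first answer per tool
    return [
        {"tool": t, "vendor": first_seen[t].vendor,
         "data_type": first_seen[t].data_type, "mentions": n}
        for t, n in mentions.most_common()  # most-used tools first
    ]
```

Sorting by mentions makes the prioritization for the next step (the traffic-light policy) obvious: the most-used tools are classified first.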

2. Light Policy Model (Weeks 3-4)

No 40-page policy. A single A4 page with three categories: green (approved without restrictions), yellow (usable with data protection requirements), and red (not usable with company data). This ‘traffic light’ system can be developed in two weeks by IT, Legal, and a business representative. Employees receive clear guidance – without prohibitions that only encourage covert behavior.

3. Quarterly Review Rhythm (ongoing)

One hour per quarter is enough: update the inventory, move tools between the traffic-light categories as vendors and data practices change, and record the result. This standing rhythm keeps the inventory current and builds the documentation trail that the AI Act's proof obligations will require.
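The one-page traffic-light policy is, at its core, just a lookup table. A sketch of how it might look as data – the tool names and category assignments here are placeholders, not recommendations:

```python
# Sketch of the traffic-light policy as a lookup table.
# Tool names and category assignments are illustrative placeholders.
TRAFFIC_LIGHT: dict[str, set[str]] = {
    "green":  {"Enterprise-Copilot"},      # approved without restrictions
    "yellow": {"ChatGPT", "Claude"},       # usable with data protection requirements
    "red":    {"Unvetted-Browser-Plugin"}, # not usable with company data
}

def classify(tool: str) -> str:
    """Return the policy category for a tool.

    Unknown tools default to 'red' until the quarterly review
    assigns them a category - visibility before approval.
    """
    for category, tools in TRAFFIC_LIGHT.items():
        if tool in tools:
            return category
    return "red"
```

The default-to-red rule is the design choice that matters: a new tool is not forbidden forever, it is simply unclassified until someone looks at it.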

Prohibitions vs. Enablement: What Practice Shows

Companies that rely on complete prohibitions report the same thing: The use of unapproved tools decreases on paper in the short term and increases on smartphones in the medium term. Employees switch to private devices and personal accounts. The governance problem becomes more invisible, not smaller.

Enablement Approach

  • Employees remain visible in approved channels
  • Productivity gains become documentable
  • A governance culture emerges through participation
  • AI Act compliance demonstrable via inventory

Prohibition Approach

  • Usage shifts to private devices
  • Governance becomes invisible, not smaller
  • Employee frustration increases, productivity advantage is lost
  • No inventory – AI Act compliance barely possible

What a Realistic Budget Means

The most common argument against AI governance in medium-sized companies: no budget, no headcount. Both are understandable and yet no obstacle to the three measures described. The inventory costs three person-days. The traffic light policy costs four to six person-days in development, then one hour per quarter.

This is not a project – it is a decision the CIO can initiate in the next leadership meeting. What will be far more expensive is fulfilling the proof obligations from August 2026 without preparation. The three measures are not a substitute for a complete AI strategy; they are the foundation on which every complete strategy must be built. Visibility comes first – only then is control possible.

Frequently Asked Questions

When does the EU AI Act become binding for German companies?

Most provisions of the EU AI Act become binding from August 2026. High-risk AI systems will then be subject to strict documentation and audit requirements. Prohibited practices such as social scoring were already banned from February 2025. Companies therefore still have a limited lead time to build up inventory and policy.

Which AI tools are considered high-risk under the EU AI Act?
The Act's high-risk category (Annex III) covers systems used in areas such as HR and recruitment decisions, credit scoring, critical infrastructure, education, and law enforcement. General-purpose chat tools are not automatically high-risk – the decisive factor is the concrete use case, which is exactly why an inventory with assigned risk categories is needed.

A magazine by Evernine Media GmbH