Insights

Oct 16, 2025

New spaces on the Rhine for a dialogue between art and AI

Generative AI is booming – but often hidden

AI tools like ChatGPT or Gemini have gained millions of users in a very short time. In companies, a phenomenon is growing that experts call "Shadow AI": unauthorized, uncontrolled use of AI tools, mostly without the knowledge or approval of the IT department.

The dimension in numbers:

1.8 billion monthly accesses to GenAI systems worldwide
500–600 million daily users
20–24 million people in Germany access GenAI at least once a month
68% of employees use AI systems in unauthorized ways
7 out of 10 use GenAI in a professional context without the employer's knowledge
57% regularly enter sensitive data into external AI services

From opportunity to risk

The parallels to "Shadow IT" are obvious. What initially enables productivity and innovation leads, when used uncontrolled, to significant dangers: data protection violations, loss of trade secrets, compliance issues, and total loss of control over company-relevant information.

The new phenomenon: Bring Your Own AI (BYOAI)

Closely linked with Shadow AI is the trend "Bring Your Own AI" (BYOAI). Employees use private or freely available AI tools for professional tasks – often out of convenience, time pressure, or due to a lack of official corporate solutions.

The risks

Companies have no control over what data is entered. Private accounts are not subject to the company's security and compliance standards. Data can be stored on foreign servers or used for training purposes. BYOAI is therefore not just a convenience for employees but, in many cases, the direct path by which Shadow AI enters the company.

Warnings from business and research

As early as mid-2024, the international consulting firm Gartner classified "Shadow AI" as one of the five biggest organizational risks in digital transformation – on par with issues like cybersecurity and regulatory non-compliance.

Gartner specifically warned about:

Uncontrolled data flow: Business and customer data end up in external environments, often with no possibility of deletion.
Loss of traceability: AI-based decisions and content are neither documented nor verifiable.
Legal risks: Violations of data protection laws such as the GDPR can lead to high fines.

McKinsey and PwC deepened the topic with practical case studies:

A medium-sized manufacturing company lost confidential manufacturing plans after an employee entered them into a public AI tool; parts of the data later appeared in the training contexts of other providers.

An international consulting firm used unchecked AI-generated figures in client reports; the correction and reputational repair cost over 1.2 million euros.

An authority uploaded internal documents from application procedures into a freely accessible AI system; the resulting data protection violations led to lengthy proceedings and a loss of trust.

These examples demonstrate that Shadow AI is not a theoretical risk but a lived reality, with follow-up costs that can quickly run into the millions.

Why this topic must be addressed now

Modern AI marks a historic turning point. The economic potential for Germany is estimated at up to 330 billion euros annually (IW Köln). However, without clear internal guidelines, secure platforms, and transparent processes, this progress threatens to become a danger for companies, authorities, and the economy as a whole.

"Germany is facing the steam engine moment of the digital era. Artificial intelligence can create a new economic miracle. But currently, millions of employees use AI tools like ChatGPT without their employers' knowledge, even with sensitive data. Shadow AI and the trend 'Bring Your Own AI' jeopardize data protection, compliance, and the core of corporate value creation. If we want to harness the opportunities of AI, we must integrate it into daily business operations securely, transparently, and in a data-sovereign way."

The government advises companies to take the following steps now:

1. Formulate usage guidelines: Define which AI tools are permitted and how to handle sensitive data.

2. Create secure AI access: Implement authorized platforms that ensure data protection and compliance.

3. Raise awareness: Train employees and educate them about the risks.

4. Establish monitoring: Monitor the use of external AI tools to prevent abuse.
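As a minimal illustration of step 4, monitoring could start with something as simple as scanning web-proxy logs for requests to well-known public GenAI services. The sketch below assumes a simplified whitespace-separated log format (`<user> <domain> <path>`) and an illustrative, hypothetical domain list; a real deployment would use the organization's actual proxy or firewall data and a maintained service catalog.

```python
# Minimal monitoring sketch: flag proxy-log entries that hit well-known
# public GenAI endpoints. The log format and the domain list are
# illustrative assumptions, not a definitive blocklist.

# Hypothetical set of public GenAI domains to watch for.
GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_genai_requests(log_lines):
    """Return (user, domain) pairs for requests to known GenAI services.

    Assumes whitespace-separated log lines: '<user> <domain> <path>'.
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip malformed lines
        user, domain = parts[0], parts[1]
        if domain in GENAI_DOMAINS:
            hits.append((user, domain))
    return hits

# Example usage with made-up log lines:
sample = [
    "alice chat.openai.com /backend-api/conversation",
    "bob intranet.example.com /wiki/start",
    "carol gemini.google.com /app",
]
print(flag_genai_requests(sample))
# → [('alice', 'chat.openai.com'), ('carol', 'gemini.google.com')]
```

Such a check only detects use of known external services; it does not inspect what data was submitted, which is why it belongs alongside the usage guidelines and secure platforms from steps 1 and 2 rather than replacing them.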

From danger to opportunity

Shadow AI and BYOAI are not marginal issues for the IT department – they affect the entire organization. The unauthorized use of private or freely available AI tools may seem convenient in the short term, but in the long term it can lead to serious data protection, compliance, and security problems. Those who define clear ground rules, create secure and authorized AI access, and raise employee awareness, however, can minimize the risks and fully exploit the advantages of generative AI. The foundation for this is a secure, reliable, and scalable AI platform such as the neuland.ai HUB. In this way, a current danger becomes a sustainable opportunity for innovation, productivity gains, and competitive advantage – without any loss of control or trustworthiness.