February 11, 2026
Shadow AI: Risks, Governance and Regulatory Compliance
From covert AI tool usage to trusted AI frameworks: operational strategies for SMEs and large enterprises
The widespread adoption of generative artificial intelligence tools in Italian and European businesses has brought to light the phenomenon of Shadow AI, namely the unauthorized use of AI tools by employees, outside official channels and IT supervision. This phenomenon often stems from real needs for productivity and automation, but it sits in a dangerous gray area for privacy, data security, and compliance with GDPR and the AI Act.
What is Shadow AI?
Shadow AI is a specific subset of the broader shadow IT phenomenon, with some substantial differences. Shadow IT refers to any software, hardware, or technology introduced into a company without approval or control by IT. Shadow AI is its evolution: the use of artificial intelligence tools or applications by employees without IT approval or supervision. In this case, the risks are amplified because data is not simply “processed,” but often sent to external platforms that can reuse it, store it, or use it to train models. From an operational standpoint, this includes very concrete scenarios: an employee who copies the text of a contract into a chatbot to generate a summary, a data analyst who uploads a CSV with customer data to an online AI analysis service, a technician who uses a public AI assistant to generate or refactor code from internal repositories. Every time these flows occur on unauthorized tools, the company loses visibility into where the data ends up, how long it is stored, and under what guarantees it is processed.
The evolution of the phenomenon: data and trends
As noted by IBM, between 2023 and 2024 the use of generative AI in companies grew rapidly: the share of employees using it rose from 74% to 96%, and nearly four out of ten (38%) admit to sharing sensitive data without authorization. Such rapid growth is explained by the ease of access to SaaS services, often perceived as more agile and efficient than internal solutions, which are sometimes considered obsolete or too bureaucratic.
According to a recent report by the Boston Consulting Group (BCG), cited by Corriere della Sera, the situation in Italy reflects the global trend:
- 68% of workers regularly use GenAI tools during the work week, a value in line with Germany and the United Kingdom but slightly lower than the global average (72%);
- 54% of employees globally state they are willing to use unauthorized AI tools “in the absence of official solutions,” giving rise to shadow AI practices;
- only one in three workers has received adequate training, while 85% of executives use GenAI regularly compared to 51% of operational staff.
These data highlight two key aspects: shadow AI is not a marginal phenomenon and does not exclusively concern “technology enthusiasts.” It is now pervasive and cross-cutting, widespread in every role and sector, and emerges at the intersection between a legitimate need — improving efficiency and productivity — and an organizational void, due to the absence of official tools, clear policies, and adequate training paths.
Where shadow AI manifests
Shadow AI emerges where the use of AI appears harmless, infiltrating key business processes.
In customer service, public chatbots are used for complex responses or translations, with tickets, complaint data, and policies pasted into prompts: the result is data leakage to external platforms and potentially incorrect or misaligned responses, with serious legal and reputational impacts. In data analytics, corporate datasets end up on online AI tools for quick insights, but the opaque reuse of that data violates the GDPR principles of minimization and purpose limitation and the rules on international transfers. In marketing, GenAI generates copy, A/B tests, social posts, and landing pages fed by customer segmentations and proprietary strategies: this opens the door to data leaks and inconsistencies with brand guidelines and compliance. In IT and development teams, AI assistants produce code, refactoring, and scripts without security review, exposing intellectual property and introducing vulnerabilities generated by the AI itself.
The common trait is absolute invisibility: while management assumes a governed technological perimeter, a parallel galaxy of tools proliferates, processing sensitive data outside any control.
Regulatory framework
From a regulatory standpoint, shadow AI is particularly explosive because it sits at the intersection of multiple disciplines: data protection, artificial intelligence regulation, and network and information system security. At the GDPR level, the use of unauthorized AI tools raises at least four issues:
- the absence of a documented legal basis for the transfer of data to certain providers or to third countries;
- the failure to carry out a data protection impact assessment (DPIA) for processing that, by nature, scope, and context, would require one;
- the impossibility of guaranteeing data subjects' rights (access, erasure, restriction) if the company does not know into which models and systems the data has flowed;
- the difficulty of demonstrating the adoption of adequate technical and organizational measures.
The penalties provided for by the GDPR, up to 20 million euros or 4% of global turnover, are not an abstract threat: when a data breach is connected to unauthorized processing, the combination of direct economic damage and penalties can be devastating, particularly for SMEs.
The AI Act introduces additional obligations: classification of AI systems by risk level, stringent requirements for high-risk systems (documentation, logging, human oversight, lifecycle management), transparency, and robustness. If parts of business processes are quietly delegated to uninventoried AI tools, the company is unable to say “which AI it is using for what,” nor to demonstrate compliance with the relevant obligations.
Finally, for operators subject to NIS2 or the Cyber Resilience Act, shadow AI introduces a new attack surface: uncontrolled APIs, browser extensions that interact with critical systems, external models that read infrastructure data. All of these are difficult to protect if they are not even mapped.
Costs, breaches, and business impact
Analyses related to the 2025 Cost of a Data Breach Report highlight how shadow AI is no longer just a theoretical problem, but a measurable economic factor:
- organizations with high levels of shadow AI recorded, on average, $670,000 in additional costs per breach compared to those with little or no shadow AI;
- 20% of organizations reported breaches originating from shadow AI incidents;
- in these incidents, the compromise of personally identifiable information (PII) rises to 65% and that of intellectual property to 40%, both above the global average;
- 97% of organizations that suffered an AI-related breach lacked adequate access controls for artificial intelligence tools.
Several industry analyses estimate that a substantial share of recent data breaches stems precisely from uncontrolled use of AI, accounting for roughly one-fifth of total breaches and growing steadily year over year. For SMEs, the consequences translate into penalties and overall costs of between 1 and 3 million euros per incident, including notifications, legal fees, system restoration, and reputational damage.
These data confirm a key point from a risk management perspective: shadow AI is not just a policy issue, but a direct cost driver. Every ungoverned AI application that comes into contact with sensitive data increases, in a non-linear way, the probability of costly incidents involving precisely the most critical assets: customer data, trade secrets, pricing models, strategies.
From shadow AI to “Trusted AI”
Addressing shadow AI effectively means going beyond the logic of prohibition and building a Trusted AI model: an ecosystem in which the use of artificial intelligence is enabled, but within clear, visible, and verifiable boundaries. Building a trusted AI ecosystem requires an integrated strategy that brings together governance and security: clear policies, defined responsibilities, evaluation criteria for tools, and constant monitoring of use in business processes. From a security perspective, AI Security Posture Management (AI‑SPM) tools allow mapping of models, APIs, and data pipelines, identifying unauthorized uses or risky configurations before they become vulnerabilities.
An effective framework typically includes:
- mapping of ongoing AI use cases (official and unofficial) through internal surveys, network logs, and endpoint analysis (see the sketch after this list);
- a catalog of approved tools with clear guidelines on what can be shared (e.g., never personal data or intellectual property on public tools);
- technical guardrails: DLP, CASB, browser extension control, centralized logging;
- structured evaluation of AI vendors on GDPR, the AI Act, data localization, and contractual clauses on training and reuse;
- continuous training oriented not only to usage skills but also to risk awareness for those handling sensitive data.
An often underestimated element is dialogue: if IT simply bans popular tools without offering alternatives, shadow AI just moves further underground. The most effective experiences replace prohibition with structured solutions: vetted tools, controlled environments, and safe usage methods co-designed with the teams.
Bridging the gap
BCG data highlights a critical paradox: 85% of executives use GenAI regularly, while only 51% of operational staff, the very people who manage the processes most exposed to sensitive data, have access to the same tools and training.
To reduce shadow AI, a policy circulated by email is not enough. What is needed is a structured AI literacy program built on three pillars: technical understanding (model limitations, bias, hallucinations), regulatory awareness (GDPR, AI Act, data processing), and organizational accountability (internal channels to propose new use cases and report risks). The goal is not to suppress initiative but to channel it: employees must feel encouraged to bring useful tools to light and evaluate them together with IT, security, and legal, rather than hide them. A sign of organizational maturity is when employees not only use tools autonomously but actively propose them for shared evaluation.
Toward a conscious strategy
Shadow AI is the symptom of a transformation already underway: workers have integrated artificial intelligence into their daily work, often before the organization was ready. Data from IBM, BCG, and major observers converge: ignoring this reality does not eliminate it, but makes it riskier and more costly. For Italian companies — particularly SMEs — the immediate challenge is not to stop AI, but to bring it back into a perimeter of trust in which tools, data, and processes are visible, governed, and compliant. This is the difference between a shadow AI that erodes value in the shadows and a trusted AI that becomes a lever for competitiveness, innovation, and resilience within the European regulatory framework.

Marta Magnini
Digital Marketing & Communication Assistant at Aidia, with a degree in Communication Sciences and a passion for the performing arts.
At Aidia, we develop AI-based software solutions, NLP solutions, Big Data Analytics, and Data Science. Innovative solutions to optimize processes and streamline workflows. To learn more, contact us or send an email to info@aidia.it.



