
December 1, 2025

Whistleblower for the AI Act: the new European tool

Technical and regulatory analysis of the new European reporting channel for AI Act violations with a focus on companies and Italian regulations

On November 24, 2025, the European Union officially launched a new digital tool dedicated to whistleblowers, aimed at facilitating the reporting of violations of the AI Act, the EU's artificial intelligence regulation. This platform is a key element for monitoring and enforcing European legislation, offering secure and confidential channels for reporting non-compliant or risky practices involving AI systems.


European and Italian regulatory framework on AI

The AI Act (EU Regulation 2024/1689) is the first European body of law to define a uniform legal framework for artificial intelligence, with the objective of regulating its uses and risks and imposing compliance obligations. The regulation places particular emphasis on transparency, safety and the protection of fundamental rights, imposing strict requirements on market operators in terms of risk assessment and AI system governance.

In Italy, the European provisions are complemented by Law 132/2025, which came into force on October 10, 2025 and assigns specific supervisory and control tasks to authorities such as the Agency for Digital Italy (AgID) and the National Cybersecurity Agency (ACN). This legislation marks an important step forward in the Italian regulatory landscape, positioning the country among the first to adopt a systemic, proactive approach to managing AI deployment.

Risk classification

One of the key innovations of the regulation is the classification of AI systems based on associated risk, structured into four main categories:

  • Unacceptable risk AI systems: prohibited because they pose a direct threat to people’s rights or safety. Examples include systems that use harmful behavioral manipulation, social scoring, remote biometric identification for surveillance in public spaces, and emotion recognition in workplaces or schools.

  • High-risk AI systems: these systems can significantly affect critical aspects such as essential infrastructure, education, employment management, justice, immigration and biometric identification. They are subject to strict compliance obligations, including rigorous risk assessment, data quality assurance, detailed documentation, human oversight during use and technical robustness.

  • Limited risk AI systems: include systems that require specific communication obligations to the end user, for example chatbots and automatic content generation systems. The obligations require clear identification of generated content, such as deepfakes or synthetic texts, to maintain trust and prevent misleading information.

  • Minimal or no risk AI systems: the majority of current AI systems fall into this category and are subject to lighter regulations, with little or no impact on fundamental rights.

[Figure: classification of AI systems under the AI Act's four risk levels (minimal, limited/transparency, high and unacceptable), with practical examples for businesses and regulatory compliance.]
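The four tiers above can be sketched as a simple data structure. This is a purely illustrative model: the names and the example mapping are mine, not taken from the regulation, and the real legal classification of a system depends on the AI Act's annexes and a case-by-case analysis.

```python
from enum import Enum

class RiskTier(Enum):
    """The four AI Act risk tiers, from most to least restrictive."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict compliance obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "little or no regulation"

# Illustrative mapping of example use cases (from the article) to tiers.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "emotion recognition in workplaces": RiskTier.UNACCEPTABLE,
    "biometric identification": RiskTier.HIGH,
    "employment management": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative tier; default to MINIMAL if unlisted."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
```

Defaulting to MINIMAL mirrors the article's observation that most current AI systems fall into the lightest category.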

Compliance obligations for high-risk AI systems

For systems classified as high-risk, the AI Act provides an articulated compliance system, which includes:

  • Continuous assessment and systematic risk management throughout the system’s entire lifecycle.

  • Maintenance of exhaustive technical documentation, which facilitates compliance assessment by competent authorities.

  • Ensuring adequate human oversight measures to prevent incorrect or prejudicial automated decisions.

  • Implementation of robustness, cybersecurity and system accuracy measures.

  • Structured and timely management of post-market incidents and malfunctions.

  • Transparency and clear communication to end users about the use and characteristics of the AI system.
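For a company tracking these obligations, the list above amounts to a checklist. The sketch below models it as one boolean flag per obligation; the field names are my own shorthand for the bullets above, not terminology from the AI Act itself.

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskCompliance:
    """One flag per high-risk obligation (illustrative naming only)."""
    risk_management_in_place: bool = False        # lifecycle risk assessment
    technical_documentation_maintained: bool = False
    human_oversight_ensured: bool = False
    robustness_and_cybersecurity: bool = False
    post_market_incident_process: bool = False
    user_transparency_provided: bool = False

    def gaps(self) -> list[str]:
        """Return the names of obligations not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: a provider that has covered two of the six obligations so far.
status = HighRiskCompliance(risk_management_in_place=True,
                            human_oversight_ensured=True)
remaining = status.gaps()
```

A `gaps()` call like this makes it easy to see, at any point in the lifecycle, which obligations still need work before a conformity assessment.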

Governance and regulation enforcement

The envisioned governance model is multilevel: the European AI Office is responsible for coordinating supervisory activities within a common framework, supporting and working in collaboration with member states’ national authorities. This approach ensures uniformity in the application of regulations and allows for timely intervention in case of non-compliance or identified critical issues.

Rules for General Purpose AI (GPAI) models

The AI Act pays particular attention to general purpose AI models, such as those used to generate text, images, code or audio, which may pose systemic risks because of their capabilities and widespread use. Since August 2, 2025, specific provisions have been in force regulating their offering and use, imposing transparency, data-processing guarantees and copyright management, in line with responsible-innovation principles.


Technical features of the AI Act whistleblower tool

The whistleblower tool is a digital platform implemented by the European AI Office, designed to ensure:

  • The ability to report violations of the AI Act Regulation in a secure, confidential and anonymous manner.

  • The acceptance of reports in all official languages of the EU, with uploading of supporting documentation in various formats.

  • The use of certified encryption technologies to protect the identity of informants and the confidentiality of communications.

  • An integrated mailbox for bidirectional dialogue between informant and authorities, which allows updates on the report’s progress without revealing the reporter’s identity.

These features respond to the need for an efficient channel that complies with privacy regulations, encouraging the active participation of individuals and organizations in the correct application of the regulation.
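To make the features above concrete, here is a purely hypothetical model of the kind of information a confidential report might bundle. The EU tool's actual submission format is not public, so every field name here is an assumption for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class ViolationReport:
    """Hypothetical report payload; not the platform's actual schema."""
    summary: str                      # description of the suspected violation
    language: str = "en"              # any official EU language is accepted
    attachments: list[str] = field(default_factory=list)  # supporting files
    anonymous: bool = True            # identity withheld end to end

# Example: an Italian-language report with one supporting document.
report = ViolationReport(
    summary="Emotion-recognition system deployed in a workplace",
    language="it",
    attachments=["audit_log.pdf"],
)
```

Making anonymity the default reflects the platform's design goal: confidentiality is the baseline, not an option the reporter has to remember to enable.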


Scope of application and admissible reports

The tool is primarily aimed at providers of general purpose AI models and operators of AI systems classified as high-risk according to the AI Act Regulation. However, it is open to all interested parties who wish to report practices that may compromise aspects such as:

  • The safety and reliability of AI-based products.

  • The protection of privacy and personal data.

  • Respect for fundamental rights, including non-discrimination.

  • Compliance with applicable legal requirements in different sectors of use.

In Italy, this platform integrates the control mechanisms established by national authorities, facilitating an articulated and multilevel governance system that responds to the needs of a rapidly evolving market.


Role of the whistleblower tool in the Italian AI market

The Italian AI market recorded significant growth in 2025, with an estimated value of over 1.2 billion euros in the past year and a rising adoption rate, especially among large companies. In this context, an efficient whistleblowing system is a key element in supporting this development, bringing greater security and reliability to innovation processes. Italian companies, which are integrating AI systems in both industrial and service sectors, are required to meet stricter standards; the tool helps surface non-compliance promptly, enabling rapid corrective action and supporting a more transparent and competitive market environment.

However, for this mechanism to function effectively, the protection of informants is crucial. European legislation provides formal legal protection against retaliation starting from August 2, 2026; before that date, protection rests mainly on the confidentiality and anonymity guaranteed by the platform's technical and procedural safeguards. In Italy, where a whistleblowing culture is progressively taking hold, these guarantees take on particular importance: the integration of the AI Act with national legislation creates a solid frame of reference and incentivizes stakeholder participation in the preventive, continuous oversight of AI systems.

Overall, the launch of the AI Act whistleblower tool marks significant progress in the regulatory control and ethical governance of artificial intelligence at the European level, with relevant effects for Italy. The platform serves as an essential technical and regulatory instrument for ensuring that AI is developed and adopted with respect for fundamental rights and safety. For Italian companies, the new reporting channel means integrating more effective compliance processes, improving transparency and contributing to a more reliable digital ecosystem. In a rapidly changing regulatory and technological context, it fosters a proactive, coordinated approach among institutions, companies and citizens.



Marta Magnini


Digital Marketing & Communication Assistant at Aidia, graduated in Communication Sciences and passionate about performing arts.

Aidia

At Aidia, we develop AI-based software solutions, NLP solutions, Big Data Analytics, and Data Science. Innovative solutions to optimize processes and streamline workflows. To learn more, contact us or send an email to info@aidia.it.
