April 2, 2026
AI and Cybersecurity 2026: Preventive Security for Italian Companies
AI-powered attacks up 47%, governance absent in 80% of Italian companies: what changes in 2026 and how to build a defense that holds over time.
Let us start from a concrete data point: in 2025, according to IBM, the global average cost of a breach dropped to $4.44 million, the first decline in five years. It looks like good news, and in part it is. But in the United States, in the same year, the same indicator reached an all-time high of $10.22 million. The gap tells us something important: organizations that prepared are starting to see results; those that fell behind are paying an ever-larger bill. At GoBeyond in Milan, this past February, Barbara Poli of Grandi Navi Veloci said something simple and precise: “A preventive, not reactive, approach to security with AI.” The months since have only added numbers to that statement.
The threat landscape has changed structurally
Cyber attacks powered by AI grew by 47% in 2025, according to aggregate analyses including the Verizon DBIR and DeepStrike research. This is not an abstract percentage: behind it lies a qualitative change in how attacks are built and executed. The barrier to entry for cybercrime has dropped drastically: no coding required, no advanced technical skills, not even speaking the victim’s language. Tools like WormGPT, a malicious generative AI tool often described as the “evil twin” of ChatGPT, automate significant portions of the offensive cycle, from reconnaissance to delivery, generating enough variants to evade signature-based detection systems. The phishing email that arrives in your inbox knows your name, your role, and the last project you worked on.
Speed is the other variable that changes everything. A case documented by the CISO panel at Harvard Extension School tells of a company that lost over $25 million in less than thirty minutes. No conventional incident response process is designed to operate on that timescale. When the notification arrives, it is already over.
Italy: the problem is not technology
If the global picture is concerning, the Italian one has a specific trait that makes it even more delicate. It is not that Italian companies fail to adopt AI; they do, and often quickly. The problem is what they build, or fail to build, around it. Research by TrendAI on 100 decision makers at Italian companies with over 250 employees is unequivocal: 60% have approved AI projects despite explicit concerns about security. Not out of negligence, but out of competitive pressure. The fear of falling behind is stronger than the fear of risk. It is an understandable dynamic, but it leads to building on unstable foundations.
The numbers that follow complete the picture. Only 20% of Italian companies have complete, operational AI policies. 57% of the sample believe AI is already outpacing their defensive capabilities: they know they are behind, yet they continue anyway. 41% cite the lack of clear regulation as the main obstacle; many are waiting for rules to arrive from outside instead of building internal governance now.
As for the industrial context, the manufacturing sector is the most affected globally, with 25.7% of total attacks according to the IBM X-Force Threat Intelligence Index. For a country where manufacturing is still the backbone of the productive system, this is not background data.
The four threats to really understand
Not all threats are equal, and confusing them leads to poorly calibrated defenses. It is worth distinguishing them.
AI-Generated Phishing
Emails built by language models achieve open rates of 72%, nearly double that of traditional phishing. The reason is simple: they are trained on victims’ public profiles and replicate tone, context, and urgency with precision. The Verizon DBIR 2025 confirms that social engineering remains the primary attack vector; AI has simply made it far more scalable.
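None of this makes defense hopeless: much AI-generated phishing still arrives from lookalike sender domains, which can be flagged mechanically. Below is a minimal sketch in Python, assuming an illustrative trusted-domain list and similarity threshold (both hypothetical, not a production ruleset), that scores how closely a sender’s domain resembles a trusted one.

```python
from difflib import SequenceMatcher

# Illustrative assumption: the set of domains the organization trusts.
TRUSTED_DOMAINS = {"aidia.it", "example-bank.com"}

def lookalike_score(domain: str, trusted: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means the strings are identical."""
    return SequenceMatcher(None, domain.lower(), trusted.lower()).ratio()

def is_suspicious(sender_domain: str, threshold: float = 0.8) -> bool:
    """Flag domains very similar to a trusted one, but not equal to it."""
    d = sender_domain.lower()
    if d in TRUSTED_DOMAINS:
        return False
    return any(lookalike_score(d, t) >= threshold for t in TRUSTED_DOMAINS)

print(is_suspicious("aidia.it"))       # exact match -> False
print(is_suspicious("a1dia.it"))       # one-character swap -> True
print(is_suspicious("unrelated.org"))  # dissimilar -> False
```

A real mail gateway would combine a check like this with SPF/DKIM/DMARC validation and homoglyph normalization; the point here is only the shape of the control.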
Deepfake Identity Fraud
In 2025, deepfake fraud caused $1.28 billion in documented losses across 1,567 verified incidents, according to the 2025 Deepfake Threat Report by Resemble AI, and this figure excludes reputational damages and those never reported. The Deloitte Center for Financial Services estimates GenAI fraud losses will reach $40 billion by 2027, with an annual growth rate of 32%. In a corporate context, the recurring scheme is a video call from the “CEO,” with synthetic face and voice, indistinguishable from the original, instructing the finance department to execute an urgent wire transfer. People tend to trust a voice or face they recognize. When that trust becomes an attack vector, traditional verification procedures are no longer sufficient.
Automated Vulnerability Discovery
41% of zero-day exploits in 2025 were identified through AI-assisted reverse engineering. But this figure underestimates the real change: today AI tools do not wait for a vulnerability to be discovered; they actively search for it, continuously and on an industrial scale. They scan exposed cloud endpoints, public GitHub repositories, misconfigured APIs. When they find leaked credentials or missing patches, they do not report them—they exploit them. The operational consequence is direct: the time separating vulnerability discovery from its exploitation has shrunk from days to hours. For organizations without disciplined patch and configuration management, it is no longer a matter of if, but when.
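The defensive counterpart is equally mechanical: scan content for credential patterns before it ever reaches a public repository. The sketch below is a deliberately simplified secret scanner; the regexes and sample data are illustrative assumptions, not a complete detection ruleset.

```python
import re

# Simplified, illustrative patterns; real scanners ship far larger rulesets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"""(?i)api[_-]?key["']?\s*[:=]\s*["'][A-Za-z0-9]{20,}["']"""
    ),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_snippet) pairs found in the text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

sample = 'config = {"api_key": "abcd1234efgh5678ijkl9012"}\nkey_id = AKIAABCDEFGHIJKLMNOP'
for rule, snippet in find_secrets(sample):
    print(rule, "->", snippet)
```

Run as a pre-commit hook, a check like this shortens exactly the window the paragraph describes: the time between a credential leaking and someone, or something, finding it.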
Shadow AI
Shadow AI is the threat organizations tend to underestimate because it does not come from outside. Employees use unapproved AI tools to write, analyze data, and automate tasks, often without anyone being aware. As experts from Harvard Extension School highlighted, in large companies with hundreds of thousands of assets, a significant share of resources is effectively unmonitored, many with embedded AI. Every ungoverned tool is a potential exfiltration channel. For a deeper treatment of this topic, see the dedicated article “Shadow AI: Risks, Governance, and Regulatory Compliance”.
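Governing Shadow AI starts with seeing it. As a minimal sketch, assuming a simplified "user domain" proxy-log format and an illustrative list of AI service domains (both hypothetical), one can count egress traffic to AI tools that are not on the approved list:

```python
from collections import Counter

# Illustrative assumptions: known AI endpoints and the sanctioned subset.
AI_SERVICE_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}
APPROVED = {"api.openai.com"}  # tools the company has formally sanctioned

def shadow_ai_report(log_lines: list[str]) -> Counter:
    """Count requests to AI services NOT on the approved list.

    Each log line is assumed to be 'user domain' separated by whitespace.
    """
    unapproved = Counter()
    for line in log_lines:
        user, domain = line.split()[:2]
        if domain in AI_SERVICE_DOMAINS and domain not in APPROVED:
            unapproved[(user, domain)] += 1
    return unapproved

logs = [
    "alice api.openai.com",   # approved tool: not counted
    "bob claude.ai",
    "bob claude.ai",
    "carol intranet.local",   # not an AI service: ignored
]
print(shadow_ai_report(logs))  # only bob's unapproved usage appears
```

In practice this would read the organization’s real egress or DNS logs; the output is a starting inventory for a governance conversation, not an enforcement mechanism.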
From principle to practice: what to do
The paradigm shift invoked at GoBeyond has concrete operational implications. AI-powered XDR systems have reduced incident response times by 44% in 2025. IBM estimates that organizations using AI and automation in their defenses save an average of $1.9 million per breach compared to those who do not. Additionally, companies that detect a breach within 200 days spend $1.88 million less than those who take longer. The numbers support the investment. But technology alone is not enough. Three principles emerge clearly from all the material analyzed.
AI governance cannot be delegated. Relying on a third-party vendor does not shift the responsibility. David Cass, CISO and instructor at Harvard Extension School, says it bluntly: “AI cannot operate as a black box. Responsibility always falls on the organization that adopted it.” Demanding transparency from providers is not optional, but a condition of engagement.
The CISO must have a voice on the board. Matteo Macina of TIM, at GoBeyond, pointed to an issue that is cultural before it is organizational: as long as security is perceived as a cost to minimize, organizations remain structurally exposed. A CISO with a real mandate and access to top management is not a privilege of large corporations. It is a necessity for anyone handling sensitive data or critical processes.
The friction between CIO and CISO is useful. Giacomo Morelli of Enegan, who at GoBeyond recounted holding both roles, raised a question that applies to many growing organizations: does it make sense to concentrate innovation and security in the same person? The answer that emerged, and that the data confirm, is no, at least beyond a certain complexity threshold. The two functions follow structurally conflicting logics: the CIO pushes toward openness, the CISO toward control. That tension is not a problem to eliminate. It is a mechanism that, managed well, raises the quality of both functions.
The point that changes the perspective
IBM found that organizations with AI-native defenses detect breaches 80 days earlier than those without. Yet even among these, a non-trivial share suffered breaches in 2025. Which leads to a conclusion worth stating explicitly: technology is necessary, but not sufficient.
Security in 2026 works as a system: technology, governance, culture, training, decision-making architecture. Removing one element weakens all the others. For those building or consolidating their AI journey, the right question is not “do we have a security tool,” but “do we have a governance structure that treats security as a constitutive dimension, not as an added layer.”
Building today what will be needed tomorrow
The competitive advantage in the coming years will not go to whoever adopted more Artificial Intelligence. It will go to whoever adopted it in a way that holds over time, with solid governance, clear roles, and controlled data. Organizations that start building on these foundations now are not just reducing risk. They are accumulating a structural advantage that will be hard to close for those still waiting to be told when the right moment is.
AI that creates lasting value is the kind that integrates into existing infrastructure without taking control away from the company: over data, security, decisions. Aidia designs tailor-made solutions for the Italian enterprise market on exactly this principle: AI systems that work inside your infrastructure, with your data, according to your rules. Learn more.
Sources:
- AI4Business, Roberto Cosentino, “AI in companies: in Italy 60% of managers approve projects despite security doubts” (March 31, 2026) https://www.ai4business.it/intelligenza-artificiale/ai-in-azienda-in-italia-il-60-dei-manager-approva-progetti-anche-con-dubbi-sulla-sicurezza/
- Harvard Extension School, CISO panel “AI and the Future of Cybersecurity” https://extension.harvard.edu/blog/ai-and-the-future-of-cybersecurity/
- IBM, “X-Force Threat Intelligence Index 2025” (April 2025) https://www.ibm.com/it-it/reports/threat-intelligence
- IBM Security, “Cost of a Data Breach Report 2025” (July 2025) https://it.newsroom.ibm.com/cost-of-data-breach-2025
- Verizon, “2025 Data Breach Investigations Report” (2025) https://www.verizon.com/business/resources/reports/dbir/
- DeepStrike, “AI Cybersecurity Threats 2025” (updated March 30, 2026) https://deepstrike.io/blog/ai-cybersecurity-threats-2025
- Industria Italiana, “60% of Italian leaders approve AI projects despite security risks: the new TrendAI research” (March 31, 2026) https://www.industriaitaliana.it/ricerca-di-trendai-leader-italiani-progetti-ai-rischi-sicurezza/
- Resemble AI, “2025 Deepfake Threat Report” (2026) https://www.resemble.ai/2025-deepfake-threat-report/

Marta Magnini
Digital Marketing & Communication Assistant at Aidia, graduated in Communication Sciences and passionate about performing arts.
At Aidia, we develop AI-based software solutions, NLP solutions, Big Data Analytics, and Data Science. Innovative solutions to optimize processes and streamline workflows. To learn more, contact us or send an email to info@aidia.it.