10 September 2025

Public Generative AI Potential and Risks

How to leverage GenAI without compromising privacy and security

Artificial Intelligence is revolutionizing the world of work, offering companies innovative tools to improve productivity, efficiency, and competitiveness.

According to research from the POLIMI Artificial Intelligence Observatory, the Artificial Intelligence market in Italy has reached 1.2 billion euros.

Growth is driven by generative AI (43% of total value). The use of this technology is evident everywhere, especially in the workplace: 9 out of 10 executives state that their employees use AI daily. Furthermore, 4 out of 10 assert that those who do so are overall more productive (“Growing Jobs 2025” research).

Generative AI therefore represents an enormous opportunity for Italian companies, improving productivity and efficiency. However, its use must be governed and secured to avoid data-protection risks and regulatory compliance issues.

Adopting chatbots and virtual assistants that operate within the corporate perimeter is the ideal solution to leverage AI without compromising data privacy and security.


How Generative AI Streamlines Daily Work

Generative AI applications in business are numerous. Here are some:

  • Customer response automation: chatbots can handle support requests, improving customer service and reducing response times.

  • Content creation: AI can support marketing and other departments in generating texts, emails, social posts, product descriptions, and much more.

  • Data analysis: advanced tools can process enormous amounts of information, identifying market trends, suggesting data-driven strategies, and generating reports or other structured materials.

  • Human resources support: AI can facilitate personnel selection, simplify document management, and improve internal communication.

  • Software programming support: automatic code generation allows developers to accelerate the development of new software solutions.

  • User experience personalization: thanks to generative AI, companies can create personalized experiences for their customers, adapting products and services to user preferences.

  • Legal support and regulatory compliance: AI-based virtual assistants can help legal teams analyze contractual documents, verify regulatory compliance, and manage legal deadlines, reducing the risk of errors and optimizing work times.

Given all these benefits, what are the problems and critical issues of using public generative AI?


Using public generative AI: what are the risks?

According to a Federprivacy report, 89% of applications and public generative AI tools used by employees are not controlled by companies.

20% of business users have independently installed at least one AI extension in their browser, and 58% of these extensions have high-risk access permissions, such as the ability to monitor browsing, read web page content, and access sensitive data. Additionally, 5.6% of extensions can expose companies to privacy violations and data breaches.
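As one illustration of why such permissions matter, a security team could scan an extension's manifest for broad-access permissions before allowing it. The sketch below assumes the Chrome extension manifest format; the risk list and the `audit_extension` helper are illustrative, not a real auditing tool:

```python
# Illustrative helper: flag browser-extension permissions that grant
# broad access to browsing data. Permission names follow the Chrome
# extension manifest format; this risk list is a simplified example.
HIGH_RISK_PERMISSIONS = {
    "tabs",            # can monitor browsing activity
    "history",         # can read browsing history
    "webRequest",      # can inspect network traffic
    "clipboardRead",   # can read copied data
    "<all_urls>",      # can read content on every page
}

def audit_extension(manifest: dict) -> list[str]:
    """Return the high-risk permissions requested by an extension manifest."""
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))
    return sorted(requested & HIGH_RISK_PERMISSIONS)

# Hypothetical manifest of an AI helper extension
manifest = {
    "name": "Example AI Helper",
    "permissions": ["tabs", "storage", "clipboardRead"],
    "host_permissions": ["<all_urls>"],
}
print(audit_extension(manifest))  # ['<all_urls>', 'clipboardRead', 'tabs']
```

An empty result would mean the extension requests none of the listed broad-access permissions; anything returned deserves a manual review.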

Another significant problem is that 18% of users paste corporate information directly into well-known GenAI tools. With such oversights, the data may be stored or processed by third parties, exposing the company's sensitive information.
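A common mitigation for this kind of leak is a data-loss-prevention filter that redacts obvious identifiers before a prompt ever leaves the company. Below is a minimal sketch; the regex patterns and the `redact` helper are illustrative only, and a real deployment would rely on a dedicated DLP or PII-detection service:

```python
import re

# Illustrative pre-submission filter: redact obvious personal and
# financial identifiers before a prompt is sent to an external tool.
# These patterns are deliberately simple and not production-grade.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Invoice for mario.rossi@example.com, IBAN IT60X0542811101000000123456"))
# Invoice for [EMAIL], IBAN [IBAN]
```

The filter runs before the network call, so even a careless paste reaches the external service only in redacted form.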

There is the possibility that confidential information, shared with common chatbots, could appear to other users asking questions on similar topics. Corporate security and compliance with data protection regulations, such as GDPR, are therefore not guaranteed. The privacy of information shared by users remains an unknown.


The problems of sharing data with generalist AI

The indiscriminate use of external AI tools exposes companies to various dangers:

  • Privacy Violations

When corporate data is handled by external generative AI tools, it is often stored on third-party servers, outside the company’s direct control. Corporate data, including customer data, risks being exposed to vulnerabilities. Furthermore, poor transparency in data processing by AI providers can lead to situations where companies cannot adequately respond to requests for data access, correction, or deletion.

  • Competitive Exposure

Using external generative AI tools without proper control can result in the loss of confidential information. This increases the risk that sensitive information, such as business strategies, customer data, or proprietary code strings, ends up in the hands of competitors or third parties. Competitive position may be compromised, as well as corporate reputation.

  • Cybersecurity Risks

The integration of unverified generative AI tools within corporate systems can represent an open door for cyberattacks and malware. In particular, integrating public chatbots could create vulnerabilities in the corporate network, exposing the company to phishing attacks and computer viruses.

  • Poor Customization

External generative AI tools are not designed for a specific corporate context. If not adequately trained, they often produce results that are inaccurate or ill-suited to the needs of the corporate ecosystem. Their reliability is frequently undermined by this generality.

  • Integration Problems

Public AI systems integrate poorly with existing corporate applications. The tools available to business decision-makers and employees therefore remain disconnected from one another, preventing true interoperability between systems.

  • External Vendor Dependence

Companies that rely on external generative AI tools risk becoming dependent on distant, hard-to-reach third-party providers. Questions about day-to-day operations or terms of use often go unanswered.

  • Legal Risks

If a company uses tools that don’t comply with data protection regulations or that violate intellectual property rights, it could incur legal liability, affecting customer and partner trust.

These problems underscore the importance of developing a corporate strategy that allows leveraging the benefits of generative AI while minimizing the associated risks.

Companies must be able to implement secure and controlled AI solutions that operate within corporate boundaries, ensuring full security and supervision by a team of experts.


The solution to public AI risks: the AVA enterprise assistant

At Aidia we have developed AVA (Aidia Virtual Assistant): a virtual agent that lets company operators navigate everyday documents quickly and automate many repetitive activities. The product acts as a true guide, able to find the right information immediately.

AVA brings the benefits of a secure AI solution that can be shaped to corporate specificities.

Specifically, AVA guarantees:

  • Data confidentiality: none of the information shared with the chatbot will ever be viewed by another user performing a similar search. AVA can be implemented on internal servers or secure clouds. In both cases, GDPR regulations are respected and no data is transmitted to third parties.

  • Human support: behind the technology is the immediate availability of a dedicated team ready to quickly resolve any customer doubts.

  • Guaranteed onboarding: companies adopting AVA have the configuration phase (deployment) handled by our technicians, so they do not face long training periods on their own.
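The "inside the corporate perimeter" principle can be pictured as a simple routing policy: prompts that touch corporate data never reach a public service. The sketch below is purely illustrative; the endpoint URLs and the keyword classifier are hypothetical placeholders, not AVA's actual architecture:

```python
# Hypothetical routing policy: prompts that touch corporate data go to
# an assistant hosted inside the company perimeter; everything else may
# use a public service. Endpoints and keywords are placeholders.
INTERNAL_ENDPOINT = "https://assistant.intranet.local/v1/chat"  # assumed on-prem URL
PUBLIC_ENDPOINT = "https://public-genai.example.com/v1/chat"

SENSITIVE_MARKERS = ("customer", "contract", "invoice", "source code")

def choose_endpoint(prompt: str) -> str:
    """Route a prompt based on a naive keyword classifier."""
    lowered = prompt.lower()
    if any(marker in lowered for marker in SENSITIVE_MARKERS):
        return INTERNAL_ENDPOINT
    return PUBLIC_ENDPOINT

print(choose_endpoint("Summarize this customer contract"))  # internal endpoint
print(choose_endpoint("Write a haiku about autumn"))        # public endpoint
```

In practice a keyword list is far too coarse; the point is only that routing decisions can be enforced in code before any data leaves the network.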

To learn more about AVA, you can request a personalized and free demo. Contact us


SOURCES:

  • Politecnico di Milano Artificial Intelligence Observatory, 2024 Edition.
  • “LinkedIn and Humangest: young people will change positions often and AI increases revenue”, Il Sole 24 Ore
  • “Artificial intelligence alarm in companies: 89% of apps and tools used by employees are out of control”, FederPrivacy
Veronica Remitti

Executive & Marketing Assistant at Aidia, graduated in Public and Political Communication Strategies, lover of nature and everything that can be narrated.

Aidia

At Aidia, we develop AI-based software solutions, NLP solutions, Big Data Analytics, and Data Science. Innovative solutions to optimize processes and streamline workflows. To learn more, contact us or send an email to info@aidia.it.
