January 13, 2026
Agentic AI and governance
How to prepare processes, data, and compliance in the post-AI Act era
Artificial intelligence is going through a phase of profound transformation: from isolated generative models, we are rapidly moving towards agentic systems capable not only of responding but of acting, coordinating tools, accessing data and making operational decisions within defined limits. Mark Minevich, AI strategist, defines this epochal shift: “In 2026, artificial intelligence will move with us as a constant co-worker, while multi-agent systems will manage entire workflows once controlled by humans”. In this scenario, with the AI Act and the Digital Omnibus redesigning boundaries, responsibilities and obligations for companies and European public administration, the key question is no longer “what is the best model?”, but “how do I prepare my processes, my data and my governance for agentic artificial intelligence?”. Gartner predicts that over 40% of projects in agentic artificial intelligence will be canceled by the end of 2027 due to rising costs, uncertain business value and inadequate risk controls.
From 2026, artificial intelligence as a daily operational partner
Predictions for 2026 outline a scenario of massive acceleration, but also of Darwinian selection among AI projects. According to Minevich, “the winners will not be the ‘AI adopters’, but those who learn to treat AI as an equal colleague”. Every employee will have a dedicated AI assistant for HR tasks, operations and real-time performance guidance, not simple chatbots for FAQs but digital collaborators integrated into workflows. Hiring and promotions will increasingly be based on AI literacy, automation skills and intuition in workflow design, with 30% of large enterprises launching AI fluency training programs according to Forrester’s predictions. Multi-agent systems composed of numerous specialized agents will collaborate on supply chain optimization, R&D pipelines and patient care pathways, but Forrester also warns of a risk of serious agentic breaches in the absence of adequate orchestration.
Strategic shift: from commoditized models to agentic orchestras
Over the past two years, many organizations have experimented with chatbots, virtual assistants and content generation use cases, often confined to POCs or innovation initiatives poorly integrated into main flows. Agentic artificial intelligence represents a paradigm shift: no longer a model that produces texts or suggestions, but sets of software agents that observe a context, decide which actions to take and execute them on real systems – CRM, ERP, ticketing systems, document repositories, third-party APIs – automatically collecting information for audit reports, analyzing and routing whistleblowing reports, monitoring logs and events to identify anomalous patterns and managing operational activities in near real-time.
In parallel, the model market is undergoing progressive commoditization: large language models, both open source and proprietary, are increasingly accessible, interoperable and hard to distinguish in daily practice. The real competitive difference no longer lies in having the largest model, but in knowing how to turn it into agentic tools integrated with core systems, company data and operational objectives. For those leading complex organizations (small and medium-sized regulated enterprises, industrial groups, public administration), this means:
- designing operational chains and architectures in which the model is just a module, not the center of the strategy;
- connecting artificial intelligence to reliable, up-to-date and governed data sources, avoiding shadow AI and uncontrolled initiatives;
- measuring impact in terms of closed cases, mitigated risks and reduced turnaround times, not just accuracy or text quality.
Value shifts from the “model that can do many things” to the ability to build vertical tools, specialized agents and intelligent operational flows that deliver measurable results on concrete problems such as regulatory compliance, risk management, IT governance, reporting processes and data security.
Physical AI: from experimentation to production in manufacturing
Skilled labor shortages have become structural: experienced technicians and operators are scarce globally and, in 2026, manufacturers will turn to artificial intelligence not only to save costs but to survive. AI will empower skilled workers by automating repetitive tasks, improving safety and optimizing the supply chain; Nvidia speaks of the “era of Physical Artificial Intelligence”, in which humanoid robots such as Tesla Optimus, Figure and Agility will move from demonstrations to targeted commercial pilots in warehouses and factories, with deployments of thousands or tens of thousands of units. While many industries struggle to quantify AI returns, manufacturing offers controlled environments where results are demonstrable: defect rates will drop, production will increase and cycle times will shrink.
Extended governance: AI Act, security and digital sovereignty
The European regulatory framework has become the inevitable context for any artificial intelligence initiative: AI Act, Digital Omnibus, digital sovereignty, whistleblowing; agentic artificial intelligence brings these issues to the center of daily operations. The AI Act introduces risk categories with stringent obligations of transparency, documentation, assessment and control for high-risk systems, while the need grows to demonstrate who did what, when and based on which data and logic, especially in sensitive processes such as human resources, finance, public administration, justice and healthcare; at the same time, Digital Omnibus and GDPR evolution require particular attention to legal bases, minimization, consent management, access and opposition rights.
The introduction of autonomous agents accentuates all these issues: it is one thing to have an assistant that produces text for human review; it is quite another to have an agent that makes operational decisions or suggests actions affecting rights, reputation and compliance.
Against this background, identity and security become the new battlefield: Minevich predicts that in 2026 “identity, not data, will become the central focus of crime and security”, with deepfakes, impersonation and agent hijacking expected to grow rapidly, raising the probability of a serious public agentic artificial intelligence breach in the same year. Companies will need AI firewalls, architectures designed with built-in security, agent governance frameworks and quantum-resistant encryption, while the browser will tend to become the true enterprise operating system, with the consequence of becoming the primary target of attacks and requiring zero-trust security models integrated directly within it.
Meanwhile, digital sovereignty takes on an even more strategic connotation when talking about agents operating on sensitive data and critical systems: it’s not just about where the model resides, but who controls the entire operational chain, from orchestrators to connectors, from data repositories to logging and monitoring systems. McKinsey predicts that spending on artificial intelligence infrastructure will continue to grow toward a projected value of 7 trillion dollars, with investments in sovereign artificial intelligence at 100 billion globally in 2026.
For public administration and regulated organizations, this translates into preferring transparent and auditable stacks, European solutions or open source that reduce dependencies on a few large non-EU suppliers, and portable architectures that avoid vendor lock-in phenomena.
Rethinking processes and data for the agentic era
To move from isolated experiments to structured use of agentic artificial intelligence, it is not enough to plug a model into existing systems: organizational and technological redesign is needed so that agents can act in a safe, controllable and compliant manner. This means:
- mapping processes end-to-end, identifying bottlenecks, repetitive activities and manual steps that consume time and generate operational risk;
- understanding which decisions can be supported or partially automated, while keeping the human decision-maker wherever judgment is needed;
- defining “zones of autonomy” that clearly establish what an agent may do automatically, what it must submit to human review and what it may never do;
- modeling policies and operational limits directly into workflows and technical integrations;
- preparing data for agentic use by improving its quality, structure and accessibility, so that agents can consult it in a controlled manner.
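One way to make “zones of autonomy” concrete is to encode them as an explicit policy table consulted before every agent action. The following is a minimal sketch, not a standard: the action names and the three-tier classification are hypothetical examples.

```python
from enum import Enum

class Zone(Enum):
    AUTONOMOUS = "autonomous"      # agent may act without review
    HUMAN_REVIEW = "human_review"  # agent must queue the action for approval
    FORBIDDEN = "forbidden"        # agent must never perform this action

# Hypothetical policy table: action name -> zone of autonomy.
POLICY = {
    "classify_ticket": Zone.AUTONOMOUS,
    "draft_reply": Zone.AUTONOMOUS,
    "update_crm_record": Zone.HUMAN_REVIEW,
    "delete_customer_data": Zone.FORBIDDEN,
}

def zone_for(action: str) -> Zone:
    """Look up the zone for an action; anything unlisted is denied."""
    return POLICY.get(action, Zone.FORBIDDEN)
```

The deny-by-default lookup reflects the point above: an agent's capabilities must be explicitly granted, never inherited implicitly from whatever its integrations happen to allow.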
Operational governance
When agents begin to make operational decisions, governance stops being an abstract theme and becomes a daily requirement: according to Gartner, the three main reasons agentic artificial intelligence projects fail are escalating costs, unclear business value and inadequate risk controls.
Effective governance is based on agent traceability, clear responsibilities and continuous review cycles. Every significant agent action requires detailed and structured recording: who executed it (unique agent identifier), what it accomplished, on which data it operated, with what outcome and based on which input or instruction. Such logs serve not only compliance, but enable rapid internal audits, timely responses to authorities and continuous improvement cycles. In parallel, human responsibility roles must be clearly defined: who configures the agents, who approves operational policies, who supervises their daily functioning and who has the authority for suspensions or modifications. Agents must also be monitored over time through key performance indicators, error rates, exceptional cases and internal feedback, so as to iteratively update prompts, policies, integrations and decision logic, making governance a dynamic and data-driven process.
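The logging requirement described above (who executed the action, what it did, on which data, with what outcome, triggered by which input) can be sketched as one structured, append-only record per agent action. The field names and values here are illustrative, not a standard schema.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AgentActionLog:
    agent_id: str    # unique agent identifier (who executed it)
    action: str      # what it accomplished
    data_refs: list  # which data it operated on (IDs, not payloads)
    outcome: str     # e.g. "success", "error", "escalated"
    trigger: str     # the input or instruction that caused the action
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json_line(self) -> str:
        """Serialize to one JSON line for append-only audit storage."""
        return json.dumps(asdict(self), sort_keys=True)

# Hypothetical example entry for a whistleblowing-routing agent.
entry = AgentActionLog(
    agent_id="compliance-agent-01",
    action="route_whistleblowing_report",
    data_refs=["report:2026-0142"],
    outcome="escalated",
    trigger="new report received via intake channel",
)
```

Storing references to data rather than the data itself keeps the audit trail useful for internal reviews and responses to authorities without turning the log into a second, unminimized copy of sensitive content.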
How to start concretely and safely
Many organizations wonder where to start, without being paralyzed by fear of risk but also without exposing themselves imprudently. A realistic path begins with a joint assessment of processes and risks: identify two or three high-value, traceable areas, such as report management, internal requests or compliance monitoring, where agentic artificial intelligence can bring measurable benefits; assess the associated risks and classify candidate systems according to the AI Act categories. The next step is a first “controlled agentic circuit”: a flow in which an agent operates within clear limits, with human supervision, complete logs and metrics defined from the start, involving from the outset functions such as regulatory compliance, IT, security, human resources and legal so that the project is neither top-down nor perceived as a “black box”. Data, feedback, incidents and success cases are then collected and used to improve agents, policies and integrations, progressively extending agentic artificial intelligence to other processes while maintaining consistent governance, technical standards and documentation.
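A “controlled agentic circuit” of this kind can be sketched as a gate that every proposed action must pass before anything touches a real system. This is a simplified illustration under stated assumptions: the policy strings, action names and in-memory log stand in for real policy services, integrations and audit storage.

```python
def run_step(agent_id, action, payload, policy, audit_log):
    """Gate one proposed agent action: execute it, queue it for human
    review, or refuse it - and record the decision either way."""
    decision = policy.get(action, "forbidden")  # deny unlisted actions
    if decision == "autonomous":
        outcome = f"executed {action}"           # stand-in for a real call
    elif decision == "human_review":
        outcome = f"queued {action} for approval"
    else:
        outcome = f"refused {action}"
    audit_log.append({"agent": agent_id, "action": action,
                      "payload": payload, "outcome": outcome})
    return outcome

log = []
policy = {"summarize_report": "autonomous", "close_case": "human_review"}
run_step("agent-7", "summarize_report", {"id": 42}, policy, log)
run_step("agent-7", "close_case", {"id": 42}, policy, log)
```

The key property is that the log grows on every branch, including refusals: the metrics defined at the start (error rates, escalation rates, exceptional cases) can then be computed directly from the audit trail rather than reconstructed after the fact.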
Towards a new human-agent operational model
Agentic AI does not replace people, but changes the way we work: from the worker using a tool to the hybrid human-agent team that co-executes complex processes. Forrester highlights that “poor AI literacy erodes trust and slows adoption, with 21% of decision-makers citing employee experience and readiness as a barrier”, and in this new operational model, skills evolve with less time dedicated to repetitive activities and more to substantive decisions, interpretation of results, exception management and process improvement, while human resources and people-dedicated teams evolve from simple administrators to “strategic architects of AI-enhanced human performance”.
For organizations that can seize this transition, the competitive advantage will not only be technological but structural: reduced decision times, better risk control, ability to adapt quickly to new regulatory and market requirements; as Minevich concludes, 2026 will separate AI leaders who master multi-agent orchestration, governance and human-AI collaboration from those who remain stuck in experimentation, and to do this it is essential to integrate three dimensions from the start: agentic technology; governance and regulatory compliance; strategic vision on data and digital sovereignty. Companies that establish strong governance in advance will move faster with fewer surprises, while those who don’t risk falling into that 40% of canceled projects predicted by Gartner.
Sources:
- AI4Business, “AI agentica, perché il futuro è negli strumenti più che nei modelli” (January 5, 2026): https://www.ai4business.it/intelligenza-artificiale/ai-agentica-perche-il-futuro-e-negli-strumenti-piu-che-nei-modelli/
- Forbes, “Agentic AI Takes Over — 11 Shocking 2026 Predictions” (December 31, 2025, updated January 7, 2026): https://www.forbes.com/sites/markminevich/2025/12/31/agentic-ai-takes-over-11-shocking-2026-predictions/
- Gartner, “Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027” (June 25, 2025): https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027
- Forrester, “Predictions 2026: AI Moves From Hype To Hard Hat Work” (October 27, 2025): https://www.forrester.com/blogs/predictions-2026-ai-moves-from-hype-to-hard-hat-work/
- Gartner, “AI’s Influence Runs Deeper Than You Think — 2026 Gartner Strategic Predictions Explain Why”: https://www.gartner.com/en/articles/strategic-predictions-for-2026

Marta Magnini
Digital Marketing & Communication Assistant at Aidia, graduated in Communication Sciences and passionate about performing arts.
At Aidia, we develop AI-based software solutions, NLP solutions, Big Data Analytics, and Data Science. Innovative solutions to optimize processes and streamline workflows. To learn more, contact us or send an email to info@aidia.it.



