
May 6, 2026

Improper Use of AI: When Automation Bias Becomes a Risk

From the Nebraska case to the CSET framework: when trust in automated tools surpasses human oversight.

Automation bias is the tendency to blindly trust algorithms and AI, mistaking their outputs for absolute truths and underestimating the importance of human judgment.

Artificial intelligence is increasingly present in everyday professional activities: it supports research, speeds up writing, summarizes documents, and helps make decisions more quickly. But this ease of use can generate an underestimated risk: relying on AI output without critically verifying it. When this happens, the problem is not only technical. It becomes a matter of professional responsibility, work quality, and process reliability.

A recent case illustrates this clearly: a Nebraska lawyer filed an appeal brief containing non‑existent case law citations, and the court referred him to disciplinary authorities. It is not an isolated episode; similar cases have occurred in Italy as well, with sanctions for gross negligence. The point is not to count errors: it is to understand what happens when professionals stop checking. This is where automation bias comes into play.


What automation bias is

Automation bias is the tendency of an individual to place excessive reliance on an automated system, favoring the machine’s output or suggestion even when contradictory information exists. According to the Center for Security and Emerging Technology (CSET) at Georgetown University, the phenomenon manifests in two main forms: omission error, when the user does not intervene because the system did not issue an alert, and commission error, when the user actively follows an incorrect system recommendation. In both cases, the result is the same: the user’s ability to exercise real control over the AI is progressively eroded.

CSET identifies three levels that influence automation bias: user behavior, technical system design, and organizational culture. None of the three operates in isolation. As the report notes, factors such as time pressure, task difficulty, workload, and stress can further exacerbate risk when they act in combination. In other words: when pressured users, interfaces that hide uncertainty, and processes without verification converge in the same context, the effect is not additive—it is systemic. This is exactly the mechanism at work in the Nebraska lawyer case: not an isolated technical error, but the result of a process in which no level—individual, technical, or organizational—exercised the necessary oversight.

The CSET report also provides an important lesson that goes beyond individual episodes: automation bias does not arise only from distracted or negligent users. It can be institutionalized through organizational choices. In the case study of the Patriot missile system, the U.S. Army had structured doctrine, training, and procedures in a way that favored the system’s automatic mode: the result was that operators, nominally in control, had effectively ceded decision‑making responsibility to the machine. According to CSET, when personnel are not adequately trained, even a system that includes human supervision can in practice operate in a fully automated manner.


The lawyer case: not a technical anomaly, but a process failure

When an organization introduces an AI tool without defining who verifies what and who is accountable for the output, it effectively institutionalizes the possibility of error. The Nebraska case shows this unmistakably.

The court clarified that the central issue was not proving definitively whether AI had been used. The point was that the brief contained numerous false statements of law, and a basic check on legal databases such as Westlaw, LexisNexis, and Nebraska’s public repositories would have shown that many of the citations did not exist at all or did not correspond to real rulings.
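Purely as an illustration of the kind of gate the court implies, the check fits in a few lines: no citation goes out until it has been confirmed against a source of record. The `lookup` callable below is a hypothetical placeholder for whatever verification step a firm actually uses (a database query, a manual search); it is not a real Westlaw or LexisNexis API.

```python
# Illustrative pre-filing check: every citation must be confirmed against an
# authoritative source before the document is signed. `lookup` is a
# placeholder, not a real legal-database API.
from typing import Callable, Iterable, List

def unverified_citations(citations: Iterable[str],
                         lookup: Callable[[str], bool]) -> List[str]:
    """Return the citations that could not be confirmed in the source of record."""
    return [c for c in citations if not lookup(c)]

# If this list is non-empty, the draft is not ready to leave the office.
```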

The “broken screen” explanation might, at best, justify a few typos or a mis‑transcribed page number. It does not explain completely invented cases, constructed with plausible details but lacking any basis in legal reality. And it is precisely this formal plausibility that makes the error difficult to detect: language models do not signal uncertainty, do not warn when they are “guessing”; they produce fluent, convincing text regardless of accuracy. The difference between a copying error and a non‑existent citation is not a nuance: it is the difference between a slip and a serious professional omission.

This is the principle the court highlights, and it applies far beyond the legal field: when content produced or assisted by an automated tool enters a professional context, the responsibility to validate it does not transfer to the machine. It remains entirely with the person who signs it.


Italy and Europe: we are not immune to the risk

Thinking that the Nebraska case is an exotic American anomaly would be a misjudgment. Italy and Europe have also seen episodes of excessive trust in automated systems, confirming that automation bias knows no geographical boundaries—only procedural gaps.

The Court of Siracusa recently fined a lawyer more than €30,000 over a defense brief citing four non‑existent Supreme Court precedents, generated by AI and never verified. Similar “phantom rulings” have even reached the Italian Supreme Court, with filings containing formally perfect references that had no match in official databases.

Automation has also shown its limits in the Italian public administration: public competitions with automatic candidate exclusions for formal errors detected by digital platforms, often overturned by administrative judges due to lack of human review. The case known as the “Frank algorithm” highlighted risks in HR processes, with dismissals based on algorithmic decisions later challenged in court.

These episodes confirm that the problem is not the tool, but how it is adopted. The message for law firms and companies is clear: automation speeds up processes, but without active human oversight those efficiency gains can turn into professional liability.

AI bias and automation bias are not the same thing

To truly understand the issue, it is useful to distinguish between AI bias and automation bias. IBM defines AI bias as the occurrence of distorted results due to human prejudices that alter the original training data or algorithm, producing potentially harmful outputs. In this case, the problem originates in the system: in the data, sampling, labels, measurement criteria, or model design.

Automation bias, by contrast, concerns how humans behave toward the system. Even a questionable, incomplete, or incorrect output may be accepted without sufficient verification if it appears plausible, fluent, or formally convincing. In other words, AI bias concerns how the system produces a result; automation bias concerns how humans receive and approve it.

This distinction is crucial because it explains why improper use of AI cannot be solved merely by improving the model. It also requires an operational culture that maintains active and conscious human oversight.


Where AI distortions come from

IBM lists several sources of distortion in AI systems. These include cognitive bias, reflecting unconscious human prejudices; sampling or selection bias, when data do not sufficiently represent the target population; measurement bias, due to incomplete data; stereotype bias; exclusion bias, when important elements are left out of the dataset; and algorithmic bias, which can produce misleading results if the problem is poorly formulated or feedback does not properly guide the system.
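To make one of these categories concrete, the sketch below shows a minimal check for sampling or selection bias: it compares each group's share in a dataset with its share in a reference population. The group labels and the five‑percentage‑point threshold are illustrative assumptions, not an IBM prescription.

```python
# Illustrative check for sampling/selection bias: compare each group's share
# in the training data with its share in a reference population. Group names
# and the 5-percentage-point threshold are arbitrary examples.
from collections import Counter

def representation_gaps(sample_labels, population_shares, threshold=0.05):
    """Return groups whose share in the sample deviates from the reference
    population by more than `threshold` (as an absolute proportion)."""
    counts = Counter(sample_labels)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if abs(observed - expected) > threshold:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Example: a dataset that over-represents group "A" relative to a 50/50 reference.
print(representation_gaps(["A"] * 80 + ["B"] * 20, {"A": 0.5, "B": 0.5}))
# {'A': 0.3, 'B': -0.3}
```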

These distortions are not marginal. IBM notes that if not addressed, they can reduce system accuracy, hinder an organization’s ability to derive value from AI, and fuel reputational and brand damage, as well as generate distrust among historically marginalized groups. This is worth emphasizing: the cost of bias does not fall only on the technical team or individual operator, but on the entire organization and its relationship of trust with clients, users, and stakeholders.

How to reduce risk: governance, verification, and tools

Sources converge on a clear answer: to limit bias and errors, organizations need governance, continuous monitoring, and human‑in‑the‑loop oversight. IBM explains that identifying and resolving distortions requires AI governance: policies, practices, and frameworks capable of guiding the responsible development and use of systems.

CSET recommends three lines of intervention, corresponding to the three identified risk levels:

  • User level: create and maintain qualification standards for understanding the tools. Disparities between perceived and actual system capabilities are among the main causes of incidents.

  • Design level: build consistent interfaces that make system uncertainty visible and do not mask limitations with fluent, persuasive outputs (a minimal sketch of this idea follows the list). Any deviation from an established design philosophy must be clearly communicated to users.

  • Organizational level: periodically review policies based on the real capabilities of the systems used, updating them as technology evolves. If organizational goals and governance policies are not aligned, automation bias is almost inevitable.
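As a small illustration of the design‑level point, an interface can refuse to show an answer without its uncertainty attached. The threshold, field names, and wording below are our own illustrative choices, not a CSET specification.

```python
# Illustrative sketch of an interface that surfaces uncertainty instead of
# masking it: the answer is never shown without its confidence, and low
# confidence triggers an explicit prompt for human verification.
# The 0.75 threshold and the wording are arbitrary illustrative choices.
def render_answer(text: str, confidence: float, threshold: float = 0.75) -> str:
    banner = f"[model confidence: {confidence:.0%}]"
    if confidence < threshold:
        banner += "  !! below threshold: verify against primary sources before use"
    return f"{banner}\n{text}"

print(render_answer("Suggested citation: ...", confidence=0.41))
```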

CSET concludes that human‑in‑the‑loop alone is not enough to prevent errors or incidents. What matters is the calibration between technical oversight and human oversight, adapted to the context, risk level, and actual capabilities of the users involved.

In practice, this means treating AI as a collaborator to supervise, not an authority to follow. Every output—text, citation, data, recommendation—should be considered a draft until proven otherwise. In contexts where errors have formal or legal consequences, this verification is not an extra precaution: it is an integral part of professional performance. The AI Act (EU Regulation 2024/1689) explicitly reiterates this in Article 14 (Human Oversight), which states that high‑risk systems must be designed to allow effective—not merely formal—human supervision, with the ability to intervene and override automated decisions. Technology can accelerate work, but it cannot replace the judgment of those who are accountable for it.
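In tool‑assisted workflows, this "draft until proven otherwise" rule can even be made explicit in the pipeline itself. The sketch below is our own illustration of a minimal human‑in‑the‑loop gate, not a mechanism prescribed by the AI Act: an output carries no weight until a named reviewer takes responsibility for it.

```python
# Illustrative human-in-the-loop gate: an AI output stays a draft until a
# named reviewer explicitly takes responsibility for it. Field names are
# illustrative, not a standard.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDraft:
    content: str
    source_tool: str                      # which AI system produced the text
    approved_by: Optional[str] = None     # nobody has signed off yet
    approved_at: Optional[datetime] = None

    def approve(self, reviewer: str) -> None:
        """Record who takes responsibility for the content, and when."""
        self.approved_by = reviewer
        self.approved_at = datetime.now(timezone.utc)

    @property
    def is_usable(self) -> bool:
        # The output becomes usable only after a human has taken ownership.
        return self.approved_by is not None

draft = AIDraft(content="...generated text...", source_tool="assistant-x")
assert not draft.is_usable            # by default: a draft, nothing more
draft.approve(reviewer="m.rossi")     # responsibility stays with the signer
assert draft.is_usable
```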


What organizations should do

For a company, using AI well does not simply mean introducing a new tool. It means designing processes that prevent automation bias from becoming everyday practice. IBM emphasizes that distorted systems can harm organizations and society, eroding the trust of clients, users, and stakeholders. CSET stresses that user training is one of the three fundamental levers for reducing automation bias risk, and that no technical measure can compensate for its absence.

In this sense, several measures are particularly relevant:

  • Define which activities can be supported by AI and which require mandatory human review: CSET distinguishes between nominal supervision and real supervision; the difference between the two can determine the outcome of an error.

  • Build diverse and multidisciplinary teams: IBM notes that the more varied the team—in role, training, and perspective—the more likely it is to detect distortions in data and processes before they cause harm.

  • Test models with real data and update controls over time: IBM recommends continuous monitoring, because no model is permanent and distortions can emerge even after deployment.

  • Document decisions made about the AI systems in use: recording which tools are used, for which activities, and with which known limitations is the first step toward making the process verifiable and improvable over time, and toward preventing uncritical use from becoming normalized by inertia. A minimal sketch of such a register follows this list.
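A register of this kind does not need to be sophisticated. A single, purely illustrative entry might look like the sketch below; the structure and field names are our own assumptions, not a standard.

```python
# Hypothetical entry in an internal AI-tool register: which tool, for which
# tasks, with which known limitations, and who reviews its output.
# The structure and field names are illustrative, not a standard.
ai_tool_register = [
    {
        "tool": "general-purpose LLM assistant",
        "permitted_tasks": ["first drafts", "summaries of internal documents"],
        "prohibited_tasks": ["filing citations in court without verification"],
        "known_limitations": ["may produce plausible but non-existent references"],
        "mandatory_review": "the professional who signs the final document",
        "last_policy_review": "2026-05-06",
    },
]
```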

The key point is that AI adoption cannot be based solely on efficiency. It must also be based on reliability, verifiability, and accountability.


Human oversight is not optional

The most useful rule is to treat AI output as a draft that appears authoritative only on the surface, not as content already validated. If a text, citation, summary, or recommendation is used in a professional context, it must be checked by the person who assumes final responsibility.

The lawyer case shows exactly this: it is not enough for information to seem credible for it to be reliable. And it is not enough for a tool to be useful for every result it produces to be correct. AI can be a real advantage, but only if embedded in a process that preserves doubt, control, and verification as essential parts of the work.



Marta Magnini

Digital Marketing & Communication Assistant at Aidia, a graduate in Communication Sciences with a passion for the performing arts.

Aidia

At Aidia, we develop AI-based software, NLP, Big Data Analytics, and Data Science solutions: innovative tools that optimize processes and streamline workflows. To learn more, contact us or send an email to info@aidia.it.