March 18, 2026
AI Washing: when artificial intelligence becomes an excuse
Layoffs, failed startups and broken promises: how AI has become the most convenient narrative shield of the moment, and why this concerns everyone.
There is a story that repeats itself with unsettling frequency. A company announces thousands of layoffs and the official communication speaks of a “transition toward artificial intelligence,” of “AI-enabled process optimization,” of “resource reallocation toward AI‑focused roles.” Markets react positively, the stock holds, investors applaud. Then come the uncomfortable questions: which AI, exactly? On which processes? With what measurable results? Often, there is no answer.
When a term enters the dictionary
On February 1st, 2026, the New York Times dedicated an article by Lora Kelley to the phenomenon, even including the official phonetic pronunciation: /ā-ī wȯ-shiŋ/. It’s not an ornamental detail. Newspapers like the NYT include phonetic transcription only when a term has crossed into common vocabulary. It’s how Anglophone culture certifies: this concept is here to stay.
The term was born by analogy with greenwashing—the practice of presenting oneself as more environmentally friendly than one actually is—and ethics washing, meaning the declaration of ethical principles without translating them into real behavior. Originally, it referred to companies claiming to use AI without actually doing so. In recent months, it has expanded to cover something more subtle and systematic: invoking artificial intelligence to justify corporate decisions that have far more ordinary motivations.
As the New York Times reports, citing a Forrester Research report (January 2026): “Many companies announcing AI‑related layoffs do not have mature, verified applications ready to fill those roles,” revealing a tendency to “attribute financially motivated cuts to the future implementation of artificial intelligence.”
Two phenomena, one root cause
AI washing manifests in two distinct forms, both rooted in the same logic: artificial intelligence has become such a powerful legitimizing label that it is more useful as a narrative tool than as an operational one.
The first concerns people: a company invokes AI as the cause of layoffs that are actually driven by financial, strategic, or managerial reasons. AI is not absent—it is used as a narrative shield to justify decisions already made for other reasons. According to Challenger, Gray & Christmas, in 2025 alone artificial intelligence was cited in announcements of over 54,000 layoffs in the United States.
The examples pile up quickly. Amazon announced up to 30,000 layoffs, later officially reduced to around 14,000, initially linking the cuts to AI and then partially retracting, claiming the main reason was “reducing bureaucracy”; most analysts, however, believe those funds are intended to finance data centers. Pinterest cut 15% of its workforce to “reallocate resources toward AI‑focused roles.” HP expects 6,000 layoffs, citing “a significant opportunity to integrate AI.” In all these cases, researchers agree: the cuts were already structurally necessary due to financial reasons, post‑pandemic over‑hiring, or poor strategic decisions. AI came later, as a justification.
The second concerns the product: a company claims to offer an AI‑based solution when, in reality, processes are manual or the role of AI is marginal compared to what is declared. It is a form of technological fraud toward customers and investors, and the Builder.ai case is the most striking—and in some ways the most instructive—example.
Founded in 2016 by Sachin Dev Duggal, the British startup promised to make software development accessible to anyone thanks to Natasha, an AI assistant that built applications on demand. The story was convincing: Microsoft as a partner, hundreds of millions raised from SoftBank and the Qatar Investment Authority, a valuation of $1.5 billion. The problem is that Natasha was not an AI system. According to investigations by the Financial Times and Bloomberg, the work was actually done manually by about 700 contracted developers in India, instructed to follow scripts that simulated an automated system. The Financial Times evoked the 18th‑century “Mechanical Turk,” the chess‑playing automaton that hid a human inside, as the perfect metaphor.
As early as 2020, the Wall Street Journal had raised doubts about the technology, without consequences. The house of cards collapsed in May 2025, when internal investigations revealed that reported revenues ($220M) were roughly four times actual revenues ($50M). The company declared bankruptcy, leaving more than 1,000 employees jobless and millions in unpaid cloud‑service debts to Amazon and Microsoft. The question that remains is perhaps more troubling than the case itself: how is it possible that, in the middle of the AI boom, no one checked what was really behind it?
On the regulatory front, the U.S. Securities and Exchange Commission (SEC) fined two investment firms, Delphia and Global Predictions, a total of $400,000 for advertising nonexistent AI capabilities. A sign that authorities are beginning to act, even if the speed of the market still far exceeds that of regulation.
In the first case, companies lie about what they intend to do; in the second, about what they are already doing. These are different deceptions in nature and target, but built from the same raw material: the fact that artificial intelligence is now such a powerful concept that it is credible in any context—even when it isn’t there.
Why it works
It’s worth asking not only who practices AI washing, but why it works so effectively. There are at least three reasons.
The first is financial. As Molly Kinder, a researcher at the Brookings Institution studying the relationship between AI and work, explained, saying “I adopted AI and optimized costs” is a far more attractive message than “the company is struggling.” In an earnings call, the phrase “artificial intelligence” functions today as a guarantee of strategic vision, regardless of how concrete or verifiable that vision actually is.
The second reason is contextual. In a climate where companies carefully weigh every public statement on sensitive economic and political issues, AI offers a justification perceived as technical and neutral, difficult to contest and free of immediate reputational fallout. “It’s not that risky to frame layoffs as AI‑related,” Kinder admitted to the Guardian, “even if the real culprit is something else.”
The third reason lies in the nature of AI itself: it is a real, rapidly evolving technology with concrete impacts already documented in many sectors. This makes it an almost impenetrable justification: who can confidently exclude that those roles won’t be automated in two years?
The problem of the “anticipatory layoff”
The most interesting contribution from the New York Times article is the description of what we might call an “anticipatory layoff”: firing people today in the name of a transformation that has not yet happened. It is not a lie in the classic sense—it is something more subtle: a future promise used retroactively as a present justification.
Peter Cappelli, professor at the Wharton School, puts it with disarming clarity: “Companies say they expect to introduce AI that will replace these jobs. But it hasn’t happened yet. And that’s a reason to be skeptical.” Forrester Research estimates that replacing 20–30% of staff with AI systems without ready, tested applications requires 18 to 24 months—assuming it works.
Data from the Yale Budget Lab confirms this view: AI has not yet significantly changed the labor market as a whole. The 700,000 tech layoffs recorded since 2022 (aggregated by Layoffs.fyi and reported by the New York Times) were primarily a correction of pandemic‑inflated hiring, not a consequence of automation. No technology has ever transformed the labor market in just a few years. “ChatGPT was released only three years ago,” noted Martha Gimbel of the Budget Lab to the Guardian—far too short a window for even a disruptive technology to reshape employment across the board.
The boundary exists—and must be sought
It would be wrong, however, to conclude that every company citing AI in its layoffs is necessarily lying. The boundary between washing and real innovation exists, and it is defined by one precise element: measurability.
The Salesforce case is cited by researchers as one of the most credible: the company reduced its customer service team from 9,000 to 5,000 people after actually implementing AI agents in online support processes—activities that current AI systems can perform with sufficient reliability. Not a promise, but a documented result within a defined scope. The difference is not between those who use AI and those who don’t. It is between those who can answer the question “which process, with which system, with what measurable results” and those who cannot—or do not want to.
The risk of becoming an empty buzzword
The deeper danger is not that some companies lie. It’s that AI may become, within a few years, an empty buzzword like blockchain or NFTs: evoked, sold, promised, but not understood, not measured, not accountable to anyone. The Klarna case is illuminating: the company had automated much of its customer service with AI chatbots, only to return to hiring human operators due to poor service quality. It is not a failure of AI per se—it is the failure of a rushed implementation driven by image rather than results. As reported by the Corriere della Sera, even Sam Altman, CEO of OpenAI, acknowledged this at the India AI Impact Summit in New Delhi: “There is a bit of AI washing—the tendency to blame artificial intelligence for layoffs that would have happened anyway.”
What it means to take AI seriously
AI washing does not only harm those who practice it. It harms the entire ecosystem: it erodes customer trust, distorts market expectations, and raises the cost of credibility for those who work with rigor and transparency. Every time a company invokes AI to cover a decision already made, it becomes a little harder for those doing real innovation to be heard without first dismantling the skepticism accumulated by others.
There is also a dimension that press releases never mention: the human one. The Guardian interviewed a former Principal Program Manager at Amazon, laid off in January 2026 and kept anonymous due to severance‑related constraints. She described herself as a strong AI user, having even built AI tools for her team. Her reading of the situation is disarmingly simple: “I was laid off to save on the cost of human labor.”
The answer to all this is not to distrust artificial intelligence. It is to demand that those who use it—or claim to use it—answer concrete questions: which process, which system, what results, in what timeframe. Not the promise of a future transformation, but the documentation of a present change. Between “AI will transform work” and “AI has already transformed this process, in this way, with these results” lies a distance that is not merely semantic. It is the distance between those who build and those who claim to build. And it is precisely there that the sector’s credibility will be decided in the coming years.
Sources:
- Corriere della Sera, “AI-washing, la nuova scusa delle big tech per licenziare migliaia di persone”, Eugenio Spagnuolo (February 23, 2026). https://www.corriere.it/tecnologia/26_febbraio_23/ai-washing-la-nuova-scusa-delle-big-tech-per-licenziare-migliaia-di-persone-7db4b9e0-ddc2-4a1a-b28e-bbea9df98xlk.shtml
- La Repubblica, “Il clamoroso fallimento di Builder, la startup che invece di un’AI aveva 700 programmatori indiani”, Arcangelo Rociola (June 6, 2025). https://www.repubblica.it/tecnologia/2025/06/06/news/builder_ai_fallimento_indiani_intelligenza_artificiale-424651787/
- The Guardian, “US companies accused of ‘AI washing’ in citing artificial intelligence for job losses”, Eric Berger (February 8, 2026). https://www.theguardian.com/us-news/2026/feb/08/ai-washing-job-losses-artificial-intelligence
- The New York Times, “Did A.I. Take Your Job? Or Was Your Employer ‘A.I.-Washing’?”, Lora Kelley (February 1, 2026). https://www.nytimes.com/2026/02/01/business/layoffs-ai-washing.html
- Forrester Research, “Forrester: AI-Led Job Disruption Will Escalate, While Fears Of A Job Apocalypse Are Overstated” (January 13, 2026). https://www.forrester.com/press-newsroom/forrester-impact-ai-jobs-forecast/
- Challenger, Gray & Christmas, 2025 annual report (December 2025, published January 2026). https://www.challengergray.com/wp-content/uploads/2026/01/Challenger-Report-December-2025.pdf
- Yale Budget Lab, “Evaluating the Impact of AI on the Labor Market: Current State of Affairs”, Martha Gimbel, Molly Kinder, Joshua Kendall and Maddie Lee (October 1, 2025). https://budgetlab.yale.edu/research/evaluating-impact-ai-labor-market-current-state-affairs
- Agenda Digitale, “Aziende AI che ingannano il mondo: ecco l’AI Washing”, Maurizio Carmignani (June 13, 2025). https://www.agendadigitale.eu/cultura-digitale/aziende-ai-che-ingannano-il-mondo-ecco-lai-washing/
- Fanpage, “L’intelligenza artificiale è un’ottima scusa per licenziarvi: benvenuti nell’era dell’AI Washing”, Valerio Berra (March 13, 2026). https://www.fanpage.it/innovazione/tecnologia/lntelligenza-artificiale-e-unottima-scusa-per-licenziarvi-benvenuti-nellera-dellai-washing/

Marta Magnini
Digital Marketing & Communication Assistant at Aidia, a graduate in Communication Sciences with a passion for the performing arts.
At Aidia, we develop AI-based software, NLP, Big Data Analytics, and Data Science solutions designed to optimize processes and streamline workflows. To learn more, contact us or send an email to info@aidia.it.