AI Washing: The New Fraud Frontier and What It Means for Investors and Companies

Matthew M. Clarke
Partner & Chair of Litigation · Kelley Clarke Law
Securities & Complex Civil Litigation · CA, TX, NY
A new term has entered the legal and regulatory vocabulary: AI washing. Borrowed from the concept of greenwashing — where companies exaggerate environmental credentials to attract investors — AI washing refers to the practice of making false, misleading, or materially exaggerated claims about a company’s artificial intelligence capabilities. And regulators are coming for it.
What Is AI Washing?
As investor enthusiasm for artificial intelligence has exploded over the past several years, so has the temptation for companies — startups and public issuers alike — to claim AI-driven capabilities that either do not exist or are far more modest than advertised. SEC officials began warning publicly about AI washing in late 2023 and early 2024, and the agency has since made it a stated enforcement priority.
The conduct typically falls into one of three patterns: (1) claiming AI capabilities that simply do not exist; (2) marketing human-performed processes as AI-automated; or (3) overstating the sophistication or performance of actual AI systems. All three can expose a company — and its executives — to serious legal liability.
The Enforcement Wave: SEC and DOJ Are Serious
The SEC fired its first shot in March 2024 with simultaneous enforcement actions against two investment advisory firms — Delphia (USA) Inc. and Global Predictions Inc. — both charged with making false and misleading statements about their use of AI in investment processes. One firm had even marketed itself as the “first regulated AI financial advisor.” Both settled, paying a combined $400,000 in civil penalties.
The cases escalated quickly. In January 2025, the SEC charged Presto Automation Inc. — a formerly Nasdaq-listed company — marking the first AI washing enforcement action against a public company. The SEC found a gap between the company’s AI performance claims and actual system data, and noted the company had failed to disclose that its AI speech recognition technology was owned by a third party and required significant human intervention.
Perhaps the most dramatic case came in April 2025, when the SEC and the DOJ jointly charged Albert Saniger, founder and former CEO of Nate Inc., with fraud. Saniger marketed Nate as a cutting-edge shopping app powered by AI, machine learning, and neural networks. In reality, according to the government, transactions were being processed manually by overseas contract workers. Saniger allegedly raised over $42 million on the strength of those false claims. The company collapsed after a news exposé revealed the truth.
The SEC has since created a dedicated Cybersecurity and Emerging Technologies Unit (CETU) to focus specifically on AI-related misconduct, and senior enforcement officials have publicly reiterated that rooting out AI fraud remains an immediate priority — even under the current administration’s broader deregulatory posture.
Private Litigation Is Accelerating
Regulatory enforcement is only part of the picture. Securities class action filings involving AI-related misrepresentations doubled between 2023 and 2024, and the trend has continued into 2025. Plaintiffs’ firms have sharpened their focus on the gap between AI marketing and AI reality.
Apple became the highest-profile target after disclosing in early 2025 that its heavily promoted AI features for Siri would be delayed until 2026. The stock lost nearly a quarter of its value — roughly $900 billion in market capitalization — as investors recalibrated expectations. A securities fraud lawsuit followed, alleging that Apple’s AI representations amounted to material misstatements.
Other notable cases include C3.ai, Inc., which faced a class action in August 2025 alleging it misled investors about AI adoption and performance, and Elastic N.V., sued in early 2025 over alleged overstatements of AI integration. In a March 2025 ruling, a federal court in the Southern District of New York allowed an AI washing case against DocGo Inc. to proceed, rejecting the company’s motion to dismiss.
What Companies Need to Know
The legal exposure here is not limited to obvious fraudsters. Companies operating in good faith can still face liability if their AI disclosures — in SEC filings, press releases, investor presentations, or on their websites — are materially inconsistent with actual capabilities. Because the SEC has signaled that it views AI-related claims as particularly important to investors, the threshold for what counts as an actionable misstatement in this area may be lower than companies expect.
Key risk areas include:
- Marketing materials and websites that describe AI capabilities in terms that outpace the underlying technology
- SEC filings (10-Ks, 10-Qs, S-1s) that reference AI as a competitive differentiator without adequate disclosure of limitations
- Third-party AI dependencies that are not disclosed — the Presto Automation case turned partly on the failure to disclose that the AI was owned by someone else
- Human-in-the-loop processes marketed as fully automated AI solutions
- Executive statements on earnings calls or at investor conferences that go beyond what the company’s systems can actually support
The Litigation Opportunity for Investors
For investors who suffered losses after AI-related misrepresentations came to light, this is an active and developing area of securities law. The pattern is consistent across the cases: a company makes aggressive AI claims, investor enthusiasm drives up the stock price, the truth eventually surfaces — through a news report, an earnings miss, or a regulatory action — and the stock collapses. That sequence is the foundation of a securities fraud claim under Rule 10b-5.
The window to act in these cases matters. Securities fraud claims are subject to strict statutes of limitations — generally two years from discovery of the violation and no more than five years from the violation itself — and early investigation, while evidence is fresh and lead plaintiff deadlines are open, is critical.
The Bottom Line
AI washing is not a niche regulatory concern. It is an active enforcement priority for the SEC and DOJ, a growing source of private securities litigation, and a real reputational and financial risk for any company that has leaned heavily on AI claims to attract investment. Whether you are a company needing to audit your disclosures or an investor who has been harmed by inflated AI promises, the legal landscape is moving fast.
At Kelley Clarke Law, we represent clients in complex securities litigation and corporate disputes. If you have questions about AI-related disclosure obligations or believe you have been the victim of AI washing, we invite you to reach out.