AI Washing: The Next Compliance Risk General Counsel Should Not Ignore

Corporate regulators have seen this before. Greenwashing put environmental claims under the microscope. Innovation washing followed, where businesses overstated the novelty or impact of products to attract investment and market attention.

Now AI washing is firmly on the agenda.

Across the technology sector, vendors are racing to label products as AI native or AI powered. In many cases, the underlying functionality is standard automation, rules based workflows or long established analytics tools that have been rebranded. Sometimes it is a GenAI tool with a "wrapper", or one plugged in without consideration of governance.

For General Counsel and those managing regulatory compliance and risk, this is not simply a marketing issue. It creates legal, regulatory and operational risk. And regulators will be watching.

What is AI Washing?

AI washing occurs when organisations exaggerate or misrepresent the use of artificial intelligence in their products or services.

It can take several forms:

  • Claiming a product uses AI when it relies on conventional automation, rules based workflows or deterministic logic
  • Overstating the sophistication, autonomy or learning capability of a system
  • Failing to explain limitations, human oversight or data dependencies
  • Marketing generic AI tools as advanced AI

In a competitive technology market, the commercial pressure to position products as AI enabled is significant. Inflated claims can quickly cross into misleading or deceptive conduct.

For corporate legal teams procuring technology, this also presents a due diligence challenge.

Regulatory Scrutiny is Increasing

Regulators globally have signalled that AI washing is in scope for enforcement.

In the United States, the Securities and Exchange Commission has taken action against investment firms for misleading statements about their use of AI. The Federal Trade Commission has also warned companies against unsubstantiated AI claims in marketing materials.

In the United Kingdom and Australia, regulators have reinforced that existing consumer protection and financial services laws apply to AI related representations. Misleading statements about AI capabilities can breach established obligations, even without AI specific legislation.

At the same time, broader AI regulatory frameworks are emerging. The EU AI Act introduces obligations tied to risk classification, transparency and governance. Organisations making claims about AI functionality will need to substantiate those claims within a structured compliance regime.

The direction is clear. Regulators are not waiting for AI specific rules to address misleading conduct. They are using existing powers now.

Why This Matters for General Counsel

For General Counsel, AI washing intersects with several areas of responsibility:

  • Procurement risk
  • Disclosure obligations
  • Marketing and investor communications
  • Data governance and privacy
  • Reputational risk

If your organisation is purchasing AI enabled tools, over reliance on inflated vendor claims can result in poor performance, unexpected compliance gaps or security exposures.

If your organisation is selling or using AI enabled products for operational advantage, unverified claims can create regulatory exposure and shareholder risk.

GCs sit at the centre of these conversations. They are increasingly expected to validate that AI claims made internally and externally are accurate, supportable and appropriately qualified.

What Trends are Emerging?

Several trends are becoming clear.

First, regulators are focusing on transparency. Organisations must be able to explain how an AI system works at a high level, what data it uses and what limitations apply.

Second, governance expectations are rising. Boards are expected to have oversight of AI deployment and associated risks. Legal teams are often tasked with designing or supporting these governance frameworks.

Third, enforcement is likely to be thematic. Once a regulator signals concern in a sector, multiple organisations can come under review.

Fourth, procurement standards are tightening. Large corporates are building more rigorous AI due diligence questionnaires into vendor onboarding processes, covering model training data, bias testing, security controls and human oversight.

AI claims are no longer accepted at face value.

What Should GCs be Watching for?

General Counsel should be asking direct, practical questions.

When procuring technology:

  • What specific AI techniques are being used?
  • Is the system deterministic or does it rely on machine learning models?
  • What data was used to train the model?
  • How is performance validated and monitored?
  • What human oversight is built into decision making?
  • How are security and privacy risks managed?

When reviewing marketing and external communications:

  • Are marketing claims technically accurate?
  • Can statements about AI capability be substantiated?
  • Are limitations clearly disclosed?
  • Is there alignment between product teams, marketing and legal?

When reporting to the board:

  • Is there clear oversight of AI deployment?
  • Are AI risks incorporated into enterprise risk frameworks?
  • Is there a documented governance model?

These are governance fundamentals. The AI label does not change that.

A Practical Approach

AI washing thrives where there is ambiguity. The solution is clarity and evidence.

Legal teams should work closely with technology and product leaders to ensure there is a shared understanding of what AI functionality actually exists. Claims should be mapped to documented technical capabilities. Marketing language should be reviewed with the same discipline applied to financial disclosures.

From a procurement perspective, AI due diligence should be structured and repeatable. This aligns with broader legal operations objectives of consistency, visibility and risk control.

AI presents real opportunity. It can drive efficiency, insight and better decision making across legal functions.

Inflated claims undermine trust and create unnecessary risk.

For General Counsel, the opportunity is straightforward. Cut through the hype. Demand transparency. Embed governance early.

Organisations that treat AI with the same discipline applied to other regulated claims will be better placed to maximise its value without inheriting avoidable risk.
