March 20, 2024

SEC Targets “AI Washing” in First-of-Its-Kind Enforcement Matters

Advisory

On March 18, 2024, the Securities and Exchange Commission (SEC or Commission) announced settled charges against two investment advisors for overstating their purported use of, and expertise in, artificial intelligence (AI) for the benefit of clients, in violation of certain provisions of the Investment Advisers Act of 1940. These cases are the first from the Division of Enforcement to focus on what the SEC has deemed “AI Washing” — making false or unfounded AI-related claims — a term intended to liken the practice to the “greenwashing” phenomenon, which has already been the target of an SEC crackdown, as discussed in our October 2023 Advisory.

This Advisory discusses public pronouncements made by senior SEC officials before the announcement of these settled actions, the allegations in the settled actions themselves, and practical considerations for companies, investment advisors, and broker-dealers going forward as the SEC continues to prioritize and expand AI-related enforcement activity. Although these two actions involve regulated financial services entities, we fully expect the SEC to apply similar scrutiny to public issuers and other regulated entities across all industries. The reality is that all companies are under competitive and investor pressure to employ AI, both to gain competitive advantage and to increase efficiencies; in doing so, they must ensure that they are deploying AI as claimed and in a responsible fashion.

SEC Focus On AI Washing

The SEC signaled that AI Washing would become a focus as early as October 2023, when an associate regional director in the SEC’s New York office stated the Commission was looking at instances where publicly traded companies and investment advisors were claiming to use AI when they were not. In early December 2023, SEC Chair Gary Gensler also began discussing AI Washing, cautioning businesses: “Don’t do it. … One shouldn’t greenwash and one shouldn’t AI wash.” He responded to an audience question by stating that misleading investors about AI is governed by the same “set of basic laws, but also the same basic concept” as misleading the public or investors in other spaces.

In February 2024, Gensler discussed AI Washing in prepared remarks at Yale Law School, analogizing the practice to The Music Man, in which “traveling salesman ‘Professor’ Harold Hill … cons the town into purchasing musical instruments….” Gensler explained, “[w]e’ve seen time and again that when new technologies come along, they can create buzz from investors, as well as false claims from the Professor Hills of the day. If a company is raising money from the public, though, it needs to be truthful about its use of AI and associated risk.” Gensler further cautioned that “[i]nvestment advisors or broker-dealers also should not mislead the public by saying they are using an AI model when they are not, nor say they are using an AI model in a particular way but do not do so. Such AI washing, whether it’s by companies raising money or financial intermediaries … may violate the securities laws.” He closed his remarks by stating, “if you are AI washing, as ‘Professor’ Hill sang, ‘Ya Got Trouble.’”

The two settled enforcement actions against investment advisors for false and misleading statements in violation of the federal securities laws’ antifraud provisions demonstrate that Chair Gensler was not making empty threats. Rather, they show that the SEC will invest enforcement resources to pursue what it considers misleading statements to the public about AI.

Findings Against Delphia

In the first settled order, the SEC found that Delphia violated Section 206(2) of the Advisers Act by making false and misleading statements between 2019 and 2023 regarding its purported use of AI and machine learning to incorporate client data in its investment process. These statements were made in Delphia’s SEC filings, in a press release, and on its website. As one example, Delphia claimed on its website that it “turns your data into an unfair investing advantage” and “put[s] collective data to work to make our artificial intelligence smarter so it can predict which companies and trends are about to make it big and invest in them before everyone else.” The SEC found, however, that Delphia did not have the AI and machine learning capabilities that it claimed. The SEC also brought charges pursuant to Section 206(4) of the Advisers Act and Rule 206(4)-1, otherwise known as the Marketing Rule, which prohibits a registered investment advisor from disseminating any advertisement that includes any untrue statement of material fact. To settle these charges, Delphia agreed, without admitting or denying the SEC’s findings, to pay a civil penalty of $225,000 and accept a censure and a cease-and-desist order.

Findings Against Global Predictions

In the second order, the SEC found that Global Predictions also violated Section 206(2) of the Advisers Act by making false and misleading claims on its website, in emails, and on social media in 2023 regarding its purported use of AI. As one example, Global Predictions claimed to be the “first regulated AI financial advisor” across all of these platforms, a claim the SEC found the firm could not substantiate. As another example, Global Predictions stated on its website that its platform provided “[e]xpert AI-driven forecasts,” while the SEC found it did not. To settle these and other charges unrelated to AI Washing, Global Predictions agreed, without admitting or denying the SEC’s findings, to pay a civil penalty of $175,000 and accept a censure and a cease-and-desist order.

Takeaways

As Chair Gensler acknowledged in a video statement released following the announcement of these settled charges, “AI is the most transformative technology of our time,” which has “the potential benefits of greater inclusion, efficiency, and user experience.” But with this new technology comes the risk that companies, investment advisors, or broker-dealers may mislead investors regarding their use of AI — whether intentionally or unintentionally — and find themselves facing potential liability under various securities laws.

Highlighting the focus on AI Washing, the SEC’s Enforcement Director Gurbir Grewal released what may be a first-of-its-kind video announcement of charges from the Division of Enforcement, noting that “[h]istory teaches us that as AI continues to develop, firms will rush to capitalize on investor interest by promoting their supposed use of AI. … Today’s enforcement actions should serve notice to the investment industry that if you claim to use AI in your investment processes, you must ensure that your representations aren’t false, they aren’t misleading.”

Importantly, Director Grewal separately stated that these cases are only the start of the Commission’s action against misuse of AI, and the SEC will be “looking for misstatements, … for breaches of fiduciary duties by advisors,” and for instances where AI is used in market manipulation.

While AI Washing and other potential abuses of AI will continue to be a rapidly developing area of examinations and enforcement over the coming months and years, the following are recommendations that companies should consider incorporating into their compliance and disclosure processes in the short term:

  • Discussing AI on earnings calls or extensively with the board may be a sign that those AI matters are potentially material and should be disclosed in public filings.
  • When disclosing material risks about AI — including operational, legal, and competitive risks — consider disclosures particularized to the company and its use of AI rather than boilerplate language.
  • Claims about AI prospects should have a reasonable basis, and investors should be told that basis.
  • When making disclosures, consider defining what the company means by “AI,” including how and where it is being used by the company and whether it is being developed in-house or supplied by others.
  • For investment advisors or broker-dealers, be cautious about public claims that you are using an AI model, generally, or claims that you are using an AI model in a particular way.
  • Ensure that policies, procedures, and disclosure controls address the use of AI and provide checks and balances to confirm both the validity of the claimed use of AI and the consistency of messaging across all communications platforms. The SEC has brought standalone cases based solely on the lack of disclosure controls at a company, notably without pointing to any false or misleading statements or omissions. We discussed one such case in our February 2023 Advisory.

Arnold & Porter continually monitors the rapidly evolving AI-related and other technological developments and recommends best practices for our clients, including on how to reduce tech-related enforcement risk and how to implement proper risk management or disclosure processes. If you are seeking advice on how to mitigate risks in connection with these issues, or with SEC compliance and enforcement more broadly, please reach out to the authors of this Advisory or your regular Arnold & Porter contact.

© Arnold & Porter Kaye Scholer LLP 2024 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.