Generative Uh-Oh? What Companies Should Learn From the FTC Investigation of OpenAI
Last week’s disclosure that the FTC is investigating OpenAI and its generative artificial intelligence (AI) products and services offers a few lessons for businesses.
First, the constant headlines about plans to adopt new AI regulations tend to obscure a key point: existing laws, many of them long-standing, already apply to this cutting-edge technology. Section 5 of the FTC Act, the statutory basis for the FTC investigation, is an important example.
Generative AI and other AI systems can do amazing things, but also worrisome ones. It may not be hyperbole to say that AI systems can harm people in all the ways that humans and our machines already do, and in some fresh ways too.
Existing laws cover many of these mischiefs in some fashion. And “the AI did it” is generally not an acceptable defense. In the FTC staff’s words, “If something goes wrong — maybe it fails or yields biased results — you can’t just blame a third-party developer of the technology. And you can’t say you’re not responsible because that technology is a ‘black box’ you can’t understand or didn’t know how to test.” Companies (and individuals) developing, marketing, deploying, and using AI systems already face these legal risks.
Second, the FTC and other federal enforcers are asserting jurisdiction to tackle AI risks under these existing laws. With mounting intensity, agencies, the FTC prominent among them, have cautioned companies about how they believe these laws apply to AI. An investigation like the one OpenAI now confronts was only a matter of time.
To help stave off government review of their own AI practices, businesses need an effective AI risk-management (i.e., compliance) program. Unfortunately, cookie-cutter programs are unlikely to meet a company's needs; the program must be tailored to how the business uses AI, its risk appetite, its culture, and other considerations.
Third, the FTC has been rather bold in some of its assertions. The staff advised in a March blog post that Section 5’s prohibition against “unfair or deceptive acts” can apply to making, selling, or using any AI system “that is effectively designed to deceive — even if that’s not its intended or sole purpose.” However, under long-settled FTC policy, a practice is unfair only if it meets three tests, including that the practice’s harms “must not be outweighed by any countervailing benefits to consumers or competition that the practice produces.” (A separate long-standing policy on deception has its own tests.)
Time will tell how constraining these boundaries prove to be for the agency. Stay tuned!
For questions about this post or managing AI’s regulatory and other risks, please contact the author or other members of Arnold & Porter’s multidisciplinary Artificial Intelligence team.
© Arnold & Porter Kaye Scholer LLP 2023. All Rights Reserved. This blog post is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.