Deployers of AI Systems Beware: The FTC Begins Its Crackdown
Right before the holidays, the U.S. Federal Trade Commission (FTC) began its long-anticipated crackdown on deployers of artificial intelligence (AI) systems. While the FTC had previously taken action on alleged privacy-related violations involving AI systems, its complaint and proposed settlement over Rite Aid’s use of AI-based facial-recognition surveillance technology marked the agency’s first foray into enforcing Section 5 of the FTC Act against non-privacy-related allegations arising from an AI deployment.
To learn more about the Rite Aid case and why we expect more such cases to follow, please read our recent Advisory and our recent Consumer Products blog post.
For ideas on putting in place a comprehensive, ongoing system for managing these risks, the U.S. National Institute of Standards and Technology’s (NIST) Artificial Intelligence Risk Management Framework (Framework) and the accompanying Playbook explain how to govern, map, measure, and manage AI risks. Please see an earlier Enforcement Edge post for more information about the Framework.
For help with understanding or managing your company’s AI risks, please feel free to contact the authors or Arnold & Porter’s multidisciplinary Artificial Intelligence team.
© Arnold & Porter Kaye Scholer LLP 2024 All Rights Reserved. This blog post is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.