The Crackdown Commences: The FTC’s Case Against Rite Aid’s Deployment of AI-Based Technology
On December 19, 2023, the U.S. Federal Trade Commission (FTC) put a big lump of coal in Rite Aid's stocking. The agency filed a complaint and proposed settlement regarding the pharmacy chain's use of artificial intelligence (AI)-based facial-recognition surveillance technology. The complaint alleges that Rite Aid violated Section 5 of the FTC Act, 15 U.S.C. § 45, by using facial-recognition technology to identify shoplifters in an unfair manner that harmed consumers. The FTC further alleges that Rite Aid violated a 2010 FTC settlement (the 2010 Order) by failing to employ reasonable and appropriate measures to prevent unauthorized access to personal information. While some may have missed this news in the run-up to the holidays, it marks a major step by the FTC to discipline businesses deploying AI systems — and provides lessons for companies seeking to avoid similar consequences.
For several years, the FTC has warned that it will use its Section 5 power against unfair and deceptive trade practices to penalize deployers of AI and other automated decision-making (ADM) systems that fail to take reasonable steps to protect consumers from harms resulting from inaccuracy, bias, lack of transparency, and breaches of privacy, among others (see the FTC's blog post and our prior Advisory). While the FTC had taken action against alleged privacy-related violations in connection with AI systems (e.g., the Everalbum and Weight Watchers cases) and had issued repeated warnings to businesses deploying AI and other ADM systems that their actions might violate Section 5 (see our March 7, 2023, March 29, 2023, and April 26, 2023 blog posts), the agency had not actually used Section 5 to address non-privacy-related harms from AI systems before the Rite Aid case. Now, the FTC's crackdown has begun, proving that those threats were not empty.
Background of the Case
According to the FTC’s complaint, Rite Aid deployed AI-based facial-recognition technology to identify potential shoplifters at certain of its stores. This technology was trained using Rite Aid’s records of individuals who it believed had engaged or attempted to engage in criminal activity in one of its stores. Many of the tens of thousands of images Rite Aid used to train its model, however, were allegedly low-quality and taken from security or phone cameras.
The FTC claims that, in operation, the system yielded many erroneous matches and that Rite Aid employees, relying inappropriately on those results, increased surveillance of certain customers, forced customers to leave stores, falsely accused customers of shoplifting — embarrassing them in front of family, bosses, and coworkers — and even reported customers to the police. The FTC alleges that the misidentifications disproportionately involved people of color and women. This allegation of particularly poor performance regarding minorities and women is consistent with other research into facial-recognition systems (although the U.S. National Institute of Standards and Technology has found that some systems do not produce such high rates of false positives or such pronounced demographic differences in false positives). Indeed, many studies have identified discriminatory treatment of minorities, women, and other protected classes as a frequent failing of other types of AI models too.
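Disparities of this kind are measurable. As a purely illustrative sketch (the record format, field names, and group labels below are our assumptions, not anything drawn from the complaint or from Rite Aid's system), a deployer could tally the share of match alerts that human review later determines to be misidentifications, broken out by demographic group:

```python
from collections import defaultdict

def false_match_share_by_group(alerts):
    """For each demographic group, return the share of facial-recognition
    match alerts that human review determined were misidentifications.

    `alerts` is a hypothetical list of dicts; each dict records one alert,
    the group of the person flagged, and whether review confirmed the match.
    """
    totals = defaultdict(int)
    false_matches = defaultdict(int)
    for alert in alerts:
        totals[alert["group"]] += 1
        if not alert["confirmed"]:
            false_matches[alert["group"]] += 1
    return {group: false_matches[group] / totals[group] for group in totals}

# Illustrative data only; in practice each record would come from a logged
# alert plus the outcome of a human review of that alert.
alerts = [
    {"group": "group_a", "confirmed": True},
    {"group": "group_a", "confirmed": False},
    {"group": "group_b", "confirmed": False},
    {"group": "group_b", "confirmed": False},
]
print(false_match_share_by_group(alerts))  # {'group_a': 0.5, 'group_b': 1.0}
```

A persistent gap between groups in a tally like this is precisely the sort of disparity the complaint alleges went unmeasured and unaddressed.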
The complaint’s description of Rite Aid’s practices reads like a how-to manual for maximizing the risks of AI deployment. It asserts:
In connection with deploying facial recognition technology in a subset of its retail pharmacy locations, Rite Aid has failed to take reasonable measures to prevent harm to consumers. Among other things, Rite Aid has:
a. Failed to assess, consider, or take reasonable steps to mitigate risks to consumers associated with its implementation of facial recognition technology, including risks associated with misidentification of consumers at higher rates depending on their race or gender;
b. Failed to take reasonable steps to test, assess, measure, document, or inquire about the accuracy of its facial recognition technology before deploying the technology;
c. Failed to take reasonable steps to prevent the use of low-quality images in connection with its facial recognition technology, increasing the likelihood of false-positive match alerts;
d. Failed to take reasonable steps to train or oversee employees tasked with operating facial recognition technology and interpreting and acting on match alerts; and
e. Failed to take reasonable steps, after deploying the technology, to regularly monitor or test the accuracy of the technology, including by failing to implement any procedure for tracking the rate of false positive facial recognition matches or actions taken on the basis of false positive facial recognition matches.
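Item (c), to take one example, faults the use of low-quality images in the match database. One form a "reasonable step" might take in practice is a simple quality gate applied before an image is enrolled. The sketch below assumes OpenCV is available; the thresholds and function name are illustrative assumptions on our part, not requirements drawn from the complaint or the settlement.

```python
import cv2  # OpenCV, used here only for basic image-quality checks

# Illustrative thresholds; real values would come from testing, not from this advisory.
MIN_WIDTH = 200
MIN_HEIGHT = 200
MIN_SHARPNESS = 100.0  # variance of the Laplacian; lower values suggest blur

def passes_quality_gate(image_path: str) -> bool:
    """Return True only if an enrollment image is readable, large enough,
    and not obviously blurred."""
    image = cv2.imread(image_path)
    if image is None:
        return False  # unreadable or missing file
    height, width = image.shape[:2]
    if width < MIN_WIDTH or height < MIN_HEIGHT:
        return False
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= MIN_SHARPNESS

# Usage (hypothetical paths):
# enrollable = [p for p in candidate_image_paths if passes_quality_gate(p)]
```

A gate like this does not cure the other failures the complaint lists, but it illustrates the modest, documentable "reasonable steps" the FTC evidently has in mind.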
The key word here is “reasonable.” Under long-settled FTC policy, a practice is unfair under Section 5 only if it meets three tests: the practice causes “substantial” injury to consumers; the injury “must not be outweighed by any countervailing benefits to consumers or competition that the practice produces; and it must be an injury that consumers themselves could not reasonably have avoided.” (A separate longstanding policy on deception has its own tests.) Use of an AI system is not inherently unfair because the system sometimes makes mistakes. Rather, context matters.
Although the FTC’s allegations regarding Rite Aid’s deployment of AI do not focus on privacy and security violations, the complaint also claims that Rite Aid breached the 2010 Order, which required Rite Aid to implement and maintain a comprehensive information security program and retain documents relating to its compliance with that requirement. In particular, that program was to include “development and use of reasonable steps to select and retain service providers capable of appropriately safeguarding personal information they receive from [Rite Aid], and requiring service providers by contract to implement and maintain appropriate safeguards.”
In the complaint, the FTC alleges that, while Rite Aid did develop such an information security program, it regularly failed to: (1) use reasonable steps to select service providers capable of appropriately safeguarding personal information; (2) periodically reassess service providers; and (3) require service providers by contract to implement and maintain appropriate safeguards for personal information they received from Rite Aid. These charges underscore that businesses must take care in how they entrust personal information to service providers, regardless of the context.
The Settlement
Under the proposed settlement of the FTC’s current charges, Rite Aid will be subject to an array of obligations. First, Rite Aid may not use facial-recognition technology for the next five years, other than for certain employment and healthcare uses, and then only if it obtains “Affirmative Express Consent” from targeted persons.
Second, Rite Aid will be required to destroy all photos and videos used or collected in connection with its facial-recognition program and, notably, all “data, models, or algorithms derived in whole or in part therefrom.” Such “algorithmic disgorgement” has become a regular feature of the FTC’s Section 5 enforcement when it believes personal information has been collected improperly (e.g., Cambridge Analytica, Everalbum, Weight Watchers, and Amazon). In this case, Rite Aid also must ensure that its service providers and other third parties that received photos and videos of consumers in connection with Rite Aid’s facial-recognition program delete those photos and videos, as well as any derived data, models, or algorithms.
Third, before using any AI-based “Automated Biometric Security or Surveillance System” (including, but not limited to, facial recognition once the five-year prohibition is over), Rite Aid must establish, implement, and maintain a risk-management program assessing and mitigating the system’s risks, including through contracting requirements, user training, system monitoring and testing, data-quality governance, corporate governance and oversight, notification of information collection and automated decisions to those affected, and a complaint and recourse mechanism for affected individuals leading to a timely review and response. As with the prohibition on facial-recognition technology, this requirement would not apply in certain employment and healthcare contexts if Affirmative Express Consent is obtained.
Fourth, Rite Aid must clearly and conspicuously disclose its use of any Automated Biometric Security or Surveillance System.
Fifth, Rite Aid generally must delete all biometric information collected in connection with an Automated Biometric Security or Surveillance System after five years (or sooner, if retaining it for the full five years is not reasonably necessary).
Sixth, Rite Aid must refrain from making misrepresentations related to the privacy and security of a dozen specified categories of personal information.
Seventh, Rite Aid must implement an information security program satisfying numerous detailed requirements. It also must engage an independent, third-party “Information Security Assessor” (satisfactory to the FTC) to perform biennial reviews of the information security program; its effectiveness; and any gaps or weaknesses in, or instances of material noncompliance with, the information security program. Rite Aid must provide these assessments to the FTC.
Eighth, Rite Aid’s CEO must certify compliance with the settlement to the FTC annually.
Ninth, Rite Aid must report all notifiable data breaches to the FTC.
Tenth, Rite Aid must adhere to various recordkeeping requirements.
Finally, Rite Aid agreed to certain compliance monitoring by the FTC.
Other than the five-year ban on using facial-recognition technology, these provisions will last for 20 years.
Implications for Compliance Programs
Risk-management programs like the one to which Rite Aid agreed are prudent not just for companies that come under FTC scrutiny, but for any business deploying AI or other ADM systems. As FTC Commissioner Alvaro M. Bedoya’s separate statement underscores — at least for certain enforcers — the proposed settlement prescribes a
baseline for … a comprehensive algorithmic fairness program. … Beyond giving people notice, industry should carefully consider how and when people can be enrolled in an automated decision-making system, particularly when that system can substantially injure them. In the future, companies that violate the law when using these systems should be ready to accept the appointment of an independent assessor to ensure compliance.
Conclusion
In light of the FTC’s prior warnings, Commissioner Bedoya’s statement, and previous statements from Chair Lina Khan (see statement of April 25, 2023 and articles dated May 3, 2023 and June 1, 2023) and Commissioner Rebecca Kelly Slaughter (see statement of January 24, 2020 and article dated August 2021), the agency’s crackdown is likely to be neither a one-off nor just related to biometric surveillance. Instead, we expect it to mark the beginning of active FTC enforcement against allegedly unfair or deceptive use of ADM systems. Commissioner Bedoya explains:
It is my view that Section 5 of the FTC Act requires companies using technology to automate important decisions about people’s lives — decisions that could cause them substantial injury — to take reasonable measures to identify and prevent foreseeable harms. Importantly, these protections extend beyond face surveillance. Indeed, the harms uncovered in this investigation are part of a much broader trend of algorithmic unfairness — a trend in which new technologies amplify old harms.
Businesses would be wise to take heed. Otherwise, they too may receive an unpleasant “gift” from government enforcers.
For questions about this advisory or managing AI’s regulatory and other risks, please contact the authors or other members of Arnold & Porter’s multidisciplinary Artificial Intelligence team.
© Arnold & Porter Kaye Scholer LLP 2024 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.