Executive Order Strengthens Oversight of AI in Financial Services and Housing to Combat Discrimination and Safeguard Privacy
In response to the increased use and purchase of artificial intelligence (AI) and AI-enabled products, President Biden’s AI executive order (EO) instructs federal agencies to enact safeguards and protections in critical fields such as financial services and housing. To ensure the protection of critical infrastructure, the EO directs the Secretary of the Treasury to publish a report detailing best practices for financial institutions to manage cybersecurity risks related to AI within 150 days of the order. To protect against discrimination and biases in AI and AI-enabled products, the EO also encourages the Director of the Federal Housing Finance Agency (FHFA) and the Director of the Consumer Financial Protection Bureau (CFPB) to require regulated entities, where possible, to evaluate (1) underwriting models for bias affecting protected groups and (2) collateral-valuation and appraisal processes to minimize bias. The CFPB, along with the FHFA, the federal banking agencies, and the National Credit Union Administration, proposed a rule that would govern the use of AI in home value appraisals, which we covered in our Advisory here.
To combat unlawful discrimination by AI in housing and other real estate transactions, the EO directs the Secretary of Housing and Urban Development (HUD) and encourages the Director of the CFPB to issue guidance on several topics within 180 days of the order. This guidance will address how tenant screening systems may violate the Fair Housing Act, the Fair Credit Reporting Act, or other federal laws. It will also address how the Fair Housing Act, the Consumer Financial Protection Act of 2010, and the Equal Credit Opportunity Act (ECOA) apply to digital platform advertising of housing, credit, and other real estate transactions. In this regard, in June 2022, Meta settled a U.S. Department of Justice suit arising out of a HUD investigation claiming that Facebook’s algorithms unlawfully steered housing advertisements to users based on protected characteristics.
The EO encourages all independent regulatory agencies to consider using existing authorities, including their rulemaking authority, to protect consumers from fraud, discrimination, and threats to privacy, as well as any other risks that may arise from AI and AI-enabled products. To mitigate such risks to consumers, the EO encourages independent regulatory agencies, as they deem appropriate, to issue guidance that clarifies the due-diligence responsibilities of regulated entities to monitor third-party AI services and emphasizes the importance of transparency into regulated entities’ AI models.
The CFPB already has taken a number of steps in these directions.
- In 2022, the CFPB revised its Supervision and Examination Manual to specify that discrimination in consumer finance can constitute a prohibited unfair, deceptive, or abusive act or practice. Several banking and other trade associations recently persuaded a federal court to rule that the CFPB may not enforce this position against their members. The CFPB appealed that ruling on November 6, 2023.
- Also last year, the CFPB announced its position that “ECOA and Regulation B do not permit creditors to use complex algorithms when doing so means they cannot provide the specific and accurate reasons for adverse actions.”
- Likewise, in June 2023, the CFPB warned, “In instances where financial institutions are relying on chatbots to provide people with certain information that is legally required to be accurate, being wrong may violate those legal obligations.”
- The agency followed up with September 2023 guidance explaining that lenders must provide specific and accurate reasons for adverse actions when using artificial intelligence or other complex models to deny credit.
With the added push from the EO, the CFPB is unlikely to be done.
Financial services providers should expect the CFPB and other federal regulators to begin proposing new standards and guidance for AI safety and security within the next six months. Providers should also expect heightened regulatory scrutiny of whether their AI and AI-enabled products uphold consumer privacy and guard against inadvertent discriminatory treatment of protected groups.
© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.