FinCEN Issues Alert on the Dangers of Artificial Intelligence Deepfakes
On November 13, 2024, the Financial Crimes Enforcement Network (FinCEN) issued an alert to help financial institutions identify and guard against fraud schemes involving deepfake media created with generative artificial intelligence (GenAI) tools. Deepfakes use artificial intelligence to create realistic but fake media, including text, pictures, audio, and videos. FinCEN’s alert mirrors growing concern among state regulators, such as the New York State Department of Financial Services (NYDFS), which also recently issued guidance on risk management practices to combat GenAI-related fraud.
In the past year, FinCEN has identified an increase in the use of deepfakes in fraud schemes targeting both institutions and individual customers. GenAI can now produce deepfake content that is difficult to distinguish from human-generated media. For example, criminals use deepfake technology to alter or create fraudulent identification documents when opening accounts at financial institutions, slipping past customer due diligence processes. Consumers themselves are also vulnerable: criminals can fabricate artificial voices and fake video footage to defraud them. To illustrate, FinCEN cited an incident earlier this year in which fraudsters used deepfake technology to impersonate the CFO of a multinational company on a video conference, instructing a finance worker to wire $25 million into the fraudsters’ account.
FinCEN’s Recommendations on How to Identify and Guard Against Deepfakes
FinCEN identified several red flags that financial institutions should watch for to help prevent the use of deepfakes in fraudulent schemes targeting consumers and their financial institutions (an illustrative sketch of how a few of these checks might be automated follows the list):
- A customer’s photo is inconsistent with other information about the customer (e.g., the customer appears much younger than their date of birth would indicate).
- A customer presents multiple identity documents that are inconsistent with each other.
- A customer uses a third-party webcam plugin during a live verification check or attempts to change communication methods during a live verification check.
- A customer declines to use multifactor authentication to verify their identity.
- A reverse-image lookup or open-source search of an identity photo matches an image in an online gallery of GenAI-produced faces.
- A customer’s photo or video is flagged by commercial or open-source deepfake detection software.
- GenAI-detection software flags the potential use of GenAI text in a customer’s profile or responses to prompts.
- A customer’s geographic or device data is inconsistent with the customer’s identity documents.
- A newly opened account has a pattern of rapid transactions or high payment volumes to potentially risky payees.
- Transactions show patterns of withdrawing funds immediately after deposit and in ways that make payments difficult to reverse, such as international bank transfers.
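To make these indicators concrete, the sketch below shows how a compliance team might encode a few of them as automated screening rules. It is purely illustrative: the record fields, thresholds, and scoring inputs (such as the hypothetical deepfake_detector_score) are assumptions for this example, not specifications drawn from FinCEN’s alert.

```python
# Illustrative red-flag screening rules, loosely based on FinCEN's list.
# All field names and thresholds are hypothetical assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Onboarding:
    stated_birth_year: int
    photo_estimated_age: int        # e.g., from a vendor face-analysis tool
    id_document_country: str
    device_geoip_country: str
    used_multifactor_auth: bool
    deepfake_detector_score: float  # 0.0-1.0, from detection software

@dataclass
class Transaction:
    timestamp: datetime
    is_deposit: bool
    hard_to_reverse: bool           # e.g., international bank transfer

def onboarding_red_flags(o: Onboarding) -> list[str]:
    """Return red flags observed during account opening."""
    flags = []
    stated_age = datetime.now().year - o.stated_birth_year  # rough age
    if abs(stated_age - o.photo_estimated_age) > 15:
        flags.append("photo inconsistent with stated date of birth")
    if o.id_document_country != o.device_geoip_country:
        flags.append("device location inconsistent with identity documents")
    if not o.used_multifactor_auth:
        flags.append("customer declined multifactor authentication")
    if o.deepfake_detector_score > 0.8:
        flags.append("media flagged by deepfake detection software")
    return flags

def rapid_withdrawal_flags(txns: list[Transaction],
                           window: timedelta = timedelta(hours=24)) -> list[str]:
    """Flag hard-to-reverse withdrawals made shortly after a deposit."""
    flags = []
    deposits = [t for t in txns if t.is_deposit]
    withdrawals = [t for t in txns if not t.is_deposit]
    for d in deposits:
        for w in withdrawals:
            if timedelta(0) <= w.timestamp - d.timestamp <= window and w.hard_to_reverse:
                flags.append("funds withdrawn soon after deposit via hard-to-reverse method")
    return flags

if __name__ == "__main__":
    record = Onboarding(
        stated_birth_year=1960, photo_estimated_age=25,
        id_document_country="US", device_geoip_country="RO",
        used_multifactor_auth=False, deepfake_detector_score=0.93,
    )
    for flag in onboarding_red_flags(record):
        print("RED FLAG:", flag)
```

In practice, rules like these would route an account to analyst review rather than trigger automatic action, and the thresholds would be calibrated against the institution’s own customer and transaction data.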
NYDFS’ Recent Guidance on Cybersecurity Risks Arising From GenAI
FinCEN is not alone in its heightened concern about the threat GenAI poses to financial institutions. In October, NYDFS issued an industry letter on GenAI’s cybersecurity risks to financial institutions. Like FinCEN, NYDFS identified both cybersecurity threats enhanced by GenAI and risks to consumers, including elaborate social engineering scams enabled by GenAI and deepfake technology.
Although the industry letter does not formally impose new cybersecurity requirements, it details what NYDFS expects of supervised financial institutions when they implement the existing requirements of New York’s cybersecurity regulation, 23 NYCRR Part 500, to guard against GenAI-related threats. For example, supervised institutions should:
- Take into account GenAI-related risks in cybersecurity risk assessments.
- Maintain strong third-party service provider and vendor management policies and procedures, especially when those third parties access the institution’s information systems or use GenAI.
- Implement robust information-system access controls to guard against access to consumer information by malicious GenAI programs.
- Provide training for all institutional personnel on the risks posed by GenAI.
- Implement a risk monitoring process that can quickly identify new GenAI-related vulnerabilities.
- If an institution uses GenAI or relies on a product that uses GenAI, ensure that controls are in place to prevent threat actors from accessing the large volumes of data maintained for the GenAI to function accurately.
For more information about legal issues related to artificial intelligence, please visit Arnold & Porter’s Artificial Intelligence webpage. For questions about combating GenAI-related fraud or using GenAI-enabled tools to combat fraud, or questions about the use of artificial intelligence in your institution’s compliance programs, please contact the authors or any of their colleagues in Arnold & Porter’s Financial Services or White Collar Defense & Investigations practice groups.
© Arnold & Porter Kaye Scholer LLP 2024 All Rights Reserved. This Blog post is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.