Government Agencies Take Aim at AI Risks: Understanding the Implications of President Biden’s AI Executive Order
President Biden’s executive order (EO) on artificial intelligence (AI) directs government agencies to manage AI systems interfacing with critical-infrastructure sectors, national security systems, and other important government information systems. Under the EO, the Department of Homeland Security (DHS) will coordinate interagency efforts to assess vulnerabilities in the critical-infrastructure and financial sectors, in addition to developing guidelines for mitigating AI-related risks. The Department of Defense (DOD) and the DHS, respectively, will develop pilot AI projects to bolster the cyber-defense capabilities of the U.S. government’s national security and non-national security information systems.
The EO's critical-infrastructure provisions focus on potential threats from AI systems. They require regulators to develop guidelines that could bind owners and operators of critical infrastructure, and they mandate regular assessments of AI's risks to critical-infrastructure sectors and financial institutions. Within 180 days after the EO becomes effective, the DHS, sector risk management agencies (SRMAs), and the Department of Commerce must address AI-related risks by incorporating, as appropriate, the AI Risk Management Framework (NIST AI 100-1) issued by the National Institute of Standards and Technology (NIST), along with other relevant security guidance, into safety and security guidelines for critical infrastructure.
- Before the EO was issued, the DHS had been developing AI guidance to inform critical-infrastructure companies how to safely incorporate AI into their operations. Since the EO, the department has stated that it will also work through the Cybersecurity and Infrastructure Security Agency (CISA) to assess "potential risks related to the use of AI in critical infrastructure sectors."
- Within 240 days after the critical-infrastructure guidelines on AI are completed, President Biden's National Security Advisor and the Director of the Office of Management and Budget must take regulatory or other actions to make the guidelines mandatory.
Within 90 days of the EO, and annually thereafter, SRMAs and other relevant agencies must coordinate with CISA to assess the potential risks of AI systems used in critical-infrastructure sectors, including ways to mitigate vulnerabilities to critical failures, physical attacks, and cyberattacks. The first reports on these AI-risk assessments are due to the DHS within 90 days after the EO becomes effective.
Within 150 days of the EO, the Secretary of the Treasury must issue a public report on how financial institutions can best manage cybersecurity risks stemming from AI systems. The EO's mandate to provide AI-related guidance to financial institutions follows earlier actions by independent agencies to strengthen oversight of AI in the financial industry. See our prior Advisory for more information.
In addition to defending against AI-related risks in the critical-infrastructure sector, the EO directs agencies to harness AI's potential to enhance the defensive capabilities of critical federal government systems, and it charges the DOD and the DHS with developing and testing such AI systems. The DOD will, within 180 days, develop plans for, conduct, and complete an operational pilot project to test and deploy AI capabilities to protect national security systems used for intelligence activities, cryptologic activities, command and control, weapons control, and other defense missions. The DHS will, within 180 days, develop plans for, conduct, and complete a comparable operational AI pilot project for non-national security systems used within the federal government.
- The EO suggests using AI capabilities, such as large language models, to find and mitigate system weaknesses.
- Within 270 days after the date of the EO, the DOD and the DHS are to report their results to President Biden's National Security Advisor, detailing the vulnerabilities identified and mitigated, as well as lessons learned from the exercises.
Directive to Develop a National Security Memorandum on AI
The EO requires the National Security Advisor and the White House Deputy Chief of Staff for Policy to coordinate an interagency national security memorandum on the governance of AI used as a component of national security systems and for military and intelligence purposes. The memorandum will guide how the DOD, the Department of State, and the Intelligence Community (IC) address the national security risks and potential benefits of AI. The White House has emphasized that the goal of the document is to guide the DOD and the IC to "use AI safely, ethically, and effectively." The EO directs that the memorandum guide the continued adoption of AI capabilities to advance the United States' national security mission and direct continued actions to address the potential use of AI systems by adversaries and other foreign actors against United States interests.
* Kyung Liu-Katz contributed to this Advisory. Kyung is a graduate of William and Mary Law School and is employed at Arnold & Porter's Washington, D.C. office. Kyung is not admitted to the practice of law.
© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.