AI: A View From Congress and the Executive Branch
Since OpenAI’s ChatGPT™ entered the American zeitgeist in November 2022, policymakers have tried to identify the risks and social implications of generative artificial intelligence (AI) and develop the appropriate regulatory response. Alongside OpenAI, other industry players are embracing more active roles in guiding federal AI policy discussions; Microsoft, for example, detailed its position in a “Blueprint for the Future” of AI governance. This Advisory provides insight into the political dynamics surrounding federal AI policy, which are critical for firms to consider when advocating for their AI-related interests.
State of Play
The Biden Administration
The White House Office of Science and Technology Policy (OSTP) is developing a National AI Strategy to “manage AI risks and harness AI opportunities.” In the same vein as the National Cyber Strategy, the National AI Strategy will coordinate interagency approaches to remediating AI risks while promoting technological development. To inform the strategy, the administration has issued a series of requests for information (RFIs), most recently a May 23 RFI seeking input on national priorities across a range of AI-related issues, including algorithmic discrimination, privacy rights, national security, and adoption of AI by government. This builds on earlier, more tailored RFIs on topics such as the use of AI for worker surveillance and AI accountability policy.
The administration has laid the groundwork for its National AI Strategy through several earlier policy and strategy documents:
- In May 2023, the administration published an updated National AI R&D Strategic Plan, which provides a roadmap for federal AI investments. The plan emphasizes the need to study the impact of AI on the workforce to facilitate adaptation, including through human-AI collaboration, and to develop interagency and public-private partnerships for the advancement of AI.
- In February 2023, the National Institute of Standards and Technology (NIST) issued version 1.0 of its AI Risk Management Framework (RMF) along with a playbook for implementation. The RMF, which is both voluntary and “law- and regulation-agnostic,” has been heralded globally as valuable guidance for organizations seeking to contain the risks of AI development, deployment, and use. (See our related Advisory here.)
- In October 2022, the White House released the Blueprint for an AI Bill of Rights. As we explained in a prior Advisory, the blueprint outlines a set of principles to promote AI safety and transparency while protecting consumers from discrimination and privacy violations. Although technically nonbinding, the principles have steered administration policy since their release.
In these and other documents, the administration has prioritized the development of “trustworthy” AI that is transparent, unbiased, and respectful of data privacy, and it has formulated a set of principles that will underpin future AI policy decisions.
In early May, the White House also hosted a stakeholder roundtable and encouraged companies to commit to a voluntary code of conduct.
Recent Legislative Activity
For its part, Congress has taken a keen interest in AI governance. The Senate Judiciary Subcommittee on Privacy, Technology, and the Law began discussing AI risks and exploring regulatory options during a May 16 hearing on “Oversight of A.I.: Rules for Artificial Intelligence.” Subcommittee Chair Richard Blumenthal (D-CT) argued Congress “has the opportunity to do now what it didn’t do for social media, and create sensible safeguards,” including: (1) transparency through testing systems before deployment; (2) requiring the disclosure of known risks; (3) encouraging independent analysis; (4) instituting limitations on use, especially in commercial invasions of privacy; and (5) creating accountability and liability schemes to force companies to internalize risks. Full committee Chair Dick Durbin (D-IL) suggested that a new agency dedicated to overseeing AI developments should administer these safeguards. As Chair Durbin predicted in his opening remarks, bipartisan support for a new AI regulator emerged during the hearing; full committee Ranking Member Lindsey Graham (R-SC) likewise endorsed creating an agency “to regulate the most transformative technology ever” that would issue licenses for the deployment of AI tools.
Other key points of discussion included the scope of future AI regulation, the need for a federal data privacy standard, and the urgency of promoting reliable and trustworthy AI. Committee members agreed that future regulations should consider the impact of a model’s inputs on the content it produces and that they should protect privacy and intellectual property while combating bias and disinformation. Sen. Marsha Blackburn (R-TN) expressed particular interest in the implications of AI-generated art and music for artist compensation, urging generative AI developers and the federal government to pay greater attention to copyright protections. The Privacy, Technology, and the Law Subcommittee will explore these concerns in greater depth during a June 7 hearing on the implications of AI for intellectual property.
The hearing coincided with a Senate Homeland Security and Governmental Affairs Committee hearing examining the role of AI in the federal government. Much of that hearing’s discussion focused on workforce-related concerns, including job displacement. These hearings follow broader congressional inquiries into the policy issues associated with AI, including Senate and House hearings on the risks and opportunities AI presents for cybersecurity. As the public and private use cases for AI proliferate, we anticipate Congress will continue to hold hearings on the subject.
While the Senate and House hold these hearings to learn more about the technology, its risks and benefits, and potential approaches to regulation, members have been authoring various proposals:
- In April 2023, Senate Majority Leader Chuck Schumer (D-NY) announced a legislative framework to protect consumers while promoting AI innovation. As explained in our discussion of this still-embryonic framework, it remains “bare scaffolding” while Leader Schumer and his colleagues decide what to erect.
- In response to a Republican National Committee ad that used AI-generated imagery, Rep. Yvette Clarke (D-NY) and Sen. Amy Klobuchar (D-MN) introduced the REAL Political Advertisements Act (H.R. 3044/S. 1596), which would expand Federal Election Campaign Act disclosures to include AI-generated content in campaign ads.
- Rep. Ritchie Torres (D-NY) is reportedly set to introduce the AI Disclosure Act of 2023, which would require similar disclaimers for all AI-generated content.
- Reps. Ted Lieu (D-CA), Ken Buck (R-CO), and Don Beyer (D-VA) and Sen. Ed Markey (D-MA) introduced legislation (S. 1394/H.R. 2894) that would block the launch of a nuclear weapon by an AI system.
- Reps. Jay Obernolte (R-CA) and Jimmy Panetta (D-CA) introduced the Artificial Intelligence for National Security Act (H.R. 1718), which would clarify the Department of Defense’s ability to utilize AI in a defensive context.
- Senate Homeland Security and Governmental Affairs Committee Chair Gary Peters (D-MI) introduced legislation (S. 1564) with Sen. Mike Braun (R-IN) that would create a program to train federal officials on the benefits and costs of AI in the workforce.
- Sens. Peter Welch (D-VT) and Michael Bennet (D-CO) introduced the Digital Platform Commission Act (S. 1671), which would establish a new federal commission to protect consumers, promote competition, and defend the public interest through comprehensive regulation. The commission would develop and enforce rules for the AI and social media sectors.
- In the 117th Congress, current House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-WA) and Ranking Member Frank Pallone (D-NJ) introduced the American Data Privacy and Protection Act (H.R. 8152; ADPPA), which then-Ranking Member of the Senate Commerce, Science, and Transportation Committee Roger Wicker (R-MS) also backed. In addition to outlining the first federal standard for the protection of consumer data and requiring firms to establish data protection measures, the ADPPA would have prohibited algorithmic discrimination against protected classes, required companies to conduct annual algorithmic impact assessments, and mandated mitigation of disparate impacts and other harms from AI use. The ADPPA remains a bipartisan priority for the House Energy and Commerce Committee, and its leaders are expected to introduce an updated version in the coming weeks. Recent press reports and remarks by committee leaders suggest the reintroduced ADPPA may feature stricter federal preemption of state law, tighter provisions addressing the data brokerage industry, and clarification of the Federal Trade Commission’s (FTC) enforcement role, among other changes. Concerns from California state officials that the ADPPA would blunt California’s privacy protections appear to have prevented former Speaker Nancy Pelosi (D-CA) from bringing the legislation to the House floor during the 117th Congress, but they are unlikely to sway Speaker Kevin McCarthy (R-CA).
- Also in the 117th Congress, Sens. Ron Wyden (D-OR) and Cory Booker (D-NJ), with Rep. Yvette Clarke (D-NY), introduced the Algorithmic Accountability Act (H.R. 6580/S. 3572), which would require firms to assess and disclose information about bias, accuracy, and a range of other factors associated with automated decision-making. The sponsors plan to reintroduce the legislation in the 118th Congress.
Despite this activity, no comprehensive legislation regulating AI is close to adoption. Members of Congress remain in learning mode, and it likely will take several years of education and negotiation before sufficient consensus emerges for a broad bill to pass. In the meantime, concerns over AI may spur efforts to pass a federal data privacy bill like the ADPPA.
In the absence of congressional action, federal regulators have utilized their existing authorities to address alleged misuses of AI. For instance, the heads of the FTC, Equal Employment Opportunity Commission, Consumer Financial Protection Bureau, and Justice Department Civil Rights Division recently issued a joint statement affirming their commitment to combat discrimination by AI and other automated systems (see our recent blog post). And the FTC launched a proceeding on “commercial surveillance” last summer that could yield wide-ranging rules on algorithms, privacy, and data security by the end of 2024 (see our Advisory for more information).
Coordination with the EU?
The U.S.-EU Trade and Technology Council (TTC), a forum for the United States and the EU to coordinate on trade and investment in, and the development and deployment of, emerging technologies, published a joint statement on May 31. The statement reflects the desire on both sides of the Atlantic for cooperation on AI technologies, as outlined in the December 2022 Joint Roadmap on “Evaluation and Measurement Tools for Trustworthy AI and Risk Management,” which created expert groups focused on AI terminology and taxonomy, coordination of U.S.-EU AI standards and tools, and monitoring of AI risks.
- The statement reaffirms the parties’ commitment to “a risk-based approach to [artificial intelligence] to advance trustworthy and responsible AI technologies.”
- The parties are developing a voluntary code of conduct that would set standards for transparency, risk audits, and other technical matters for companies developing AI technologies. The United States and the EU hope to present the code as a joint proposal to the G7 in the fall.
- The TTC also aligned on definitions for 65 AI terms, an effort meant to facilitate shared technical standards and, potentially, harmonized regulatory approaches.
But harmonizing approaches between Washington and Brussels may be a bridge too far. The EU is advancing the AI Act, a far-reaching legislative proposal to regulate AI (see our first and second Advisories on the AI Act). European officials anticipate adopting this legislation late this year or early next year.
Although a voluntary code of conduct would give companies more guidance on how AI technologies may be treated in the United States and the European Union, it is unclear whether U.S. officials or companies will sign on to a code that incorporates the more prescriptive aspects of the AI Act.
Conclusion
With bipartisan concern about AI’s risks rising, both the Biden Administration and Congress will wade deeper into AI policymaking. By participating in this process, companies can try to steer regulation away from unnecessary interference in their operations. At the same time, businesses face real regulatory and litigation risks under existing laws and should consider how to manage those risks comprehensively.
© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.