The Trump AI Executive Order: Takeaways for Developers and Deployers
In his first week back in office, President Trump both rescinded President Biden’s artificial intelligence (AI) Executive Order (EO; see our analysis of the Biden AI EO) and issued a short AI Executive Order of his own. The new EO declares, “It is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” After announcing this policy, the EO requires three sets of actions:
- The Assistant to the President for Science and Technology, the Special Advisor for AI and Crypto, and the National Security Advisor must review all actions taken by the Biden administration pursuant to the Biden AI EO for inconsistencies with, or barriers presented to, this policy. Agency heads then must act, consistent with applicable law, to remove, or propose removing, any such inconsistency or barrier. To the extent removal cannot be effected immediately, the Trump AI EO directs the Assistant to the President for Science and Technology and the relevant agency head to grant appropriate exemptions, consistent with applicable law.
- By March 24, 2025, the Director of the Office of Management and Budget (OMB), in coordination with the Assistant to the President for Science and Technology, must revise OMB memoranda on federal AI governance (M-24-10) and federal AI acquisition (M-24-18) to the extent they are inconsistent with this policy.
- By July 22, 2025, the Assistant to the President for Science and Technology, the Special Advisor for AI and Crypto, and the National Security Advisor, in coordination with the Assistant to the President for Economic Policy, the Assistant to the President for Domestic Policy, the OMB Director, and relevant agency heads, must deliver to the president an action plan for achieving this policy.
Implications for AI Developers
The Trump administration wants to unleash AI innovation — to the extent it has been held back by regulatory concerns. AI developers generally should expect less federal regulatory oversight going forward. In describing its purpose, the Trump AI EO includes the statement, “To maintain [U.S.] leadership, we must develop AI systems that are free from ideological bias or engineered social agendas.” Consistent with that view, developers selling to the U.S. government or creating AI systems for human resources need to consider whether their anti-bias guardrails might (1) include any Diversity, Equity, Inclusion, and Accessibility (DEIA) elements or constitute prohibited DEIA-related activities under the new Ending Illegal Discrimination and Restoring Merit-Based Opportunity Executive Order, or (2) otherwise be seen as potentially contributing to discriminatory employment practices highlighted by that Executive Order. (For more on that Executive Order, please see our Advisory.)
While the federal government apparently will ease its supervision of AI developers, developers still must contend with state and foreign regulation. In the absence of federal AI regulatory legislation, states have begun to act on their own. While many state legislators see the value in uniform national laws to govern this emerging technology, they are unwilling to wait to address the AI risks they fear. Indeed, some states, such as Colorado and Utah, already have adopted significant AI governance laws, and California adopted over a dozen new AI laws last year. As a result, businesses are beginning to confront a patchwork of regulatory requirements in the United States, and this problem is poised to grow rapidly. To make matters more difficult for developers looking to a global market, the European Union last year adopted its Artificial Intelligence Act; South Korea just adopted its Act on Artificial Intelligence Development and Establishing a Foundation of Trust; and other jurisdictions are weighing new regulations. (China also has adopted a variety of AI-specific regulations.)
Moreover, customers subject to these and other regulations will continue to push AI developers to create systems that facilitate the customers’ own compliance.
While each developer will weigh these factors differently, many may find it prudent to stay the course on their existing AI governance efforts, at least for now.
Implications for AI Deployers
For AI deployers, the choice is clearer: they generally should stay the course on their AI governance efforts. The Trump AI EO aims to spur innovation by providing relief for developers, not deployers. It does not alter any of the federal, state, or foreign laws with which deployers must comply, let alone the nonlegal risks that must be managed when deploying AI systems.
As exceptions to this general guidance:
- Government contractors need to consider the implications of the new Ending Illegal Discrimination and Restoring Merit-Based Opportunity Executive Order for the AI systems they are deploying and their AI governance efforts. (For more on that Executive Order, please see our Advisory.)
- Deployers of AI systems for human resources need to pay attention (if they have not already) to the potential for reverse discrimination, as well as for discrimination against historically underrepresented groups.
Looking Forward
The Trump AI EO lays the groundwork for additional changes in federal AI policy. Some of these changes primarily will affect the government’s own AI procurement and deployment (and the private sector only indirectly). Others will have greater implications for corporate development and deployment of AI. For advice on navigating these changes and other AI-related issues, please contact the authors or our interdisciplinary AI team.
© Arnold & Porter Kaye Scholer LLP 2025. All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.