Introduction
Artificial Intelligence (AI) is an object of serious policy interest. Unlike many other policy challenges on Americans’ plates, however, AI has thus far largely avoided significant politicization. That relative dispassion has worked to its benefit. Indeed, the degradation of general political discourse obscures a surprising fact: the first Trump administration and the Biden administration exhibited significant continuity in their AI policymaking. As Matteo Wong observed days before the 2024 presidential election, Vice President Harris and former President Trump shepherded AI policies through the White House with sufficient overlap to make their adversarial rhetoric seem politically contingent.
This policy brief anticipates AI policymaking under Trump 2.0 by assessing initiatives under Trump 1.0 and the Biden administration. This is valuable given the non-consecutive nature of Trump’s terms, though it is also necessary: little about the Trump 2.0 AI policy agenda can be gleaned from the 2024 presidential campaign beyond a promise to repeal Biden’s October 2023 AI Executive Order (EO). (Adding to the uncertainty, the Trump campaign distanced itself from private-sector drafts of a new EO for defense-oriented AI “Manhattan Projects.”)
It draws three conclusions: (1) There are significant continuities across Trump 1.0 and Biden that should not be lost amid (real) divergences; (2) The second Trump administration will find itself deciding the fate of policies that were continuations of its own; (3) Open-source, potentially hybrid techniques in automated reasoning and an AI industry downturn are two potential industry curveballs that threaten to re-frame the US government’s AI policymaking agenda during Trump 2.0.
Trump 1.0 AI Policymaking
Among the earliest references to AI in the first Trump administration was in the December 2017 National Security Strategy (NSS). The NSS prioritized “emerging technologies critical to economic growth and security, such as…advanced computing technologies, and artificial intelligence.” It also highlighted the growing role of AI in information statecraft.
Two EOs are particularly important. The February 2019 EO on Maintaining American Leadership in Artificial Intelligence aimed to protect “American technology, economic and national security, civil liberties, privacy, and American values” and to enhance “international and industry collaboration with foreign partners and allies” (S1). It promoted “sustained investment in AI R&D in collaboration with industry, academia, international partners and allies, and other non-Federal entities to generate technological breakthroughs in AI and related technologies…” (S2(a)). Importantly, this EO established the American Artificial Intelligence Initiative.
Furthermore, a December 2020 EO on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government tasked federal agencies with adopting guidelines and principles in their uses of AI for the sake of public trust and benefit (S1), exempting national security applications, common commercial products, and R&D activities (S9(d)).
US government AI policymaking began meeting the technological moment in earnest during this period. The National Artificial Intelligence Initiative Act of 2020 established the National Artificial Intelligence Initiative. The Initiative’s purposes include continued American AI R&D leadership, world-class development of trustworthy AI in the public and private sectors, preparation of the current and future workforce for AI, and the coordination of existing federal AI activities. The law mandates that the Director of the White House Office of Science and Technology Policy (OSTP) establish the National Artificial Intelligence Initiative Office (NAIIO), tasked with executing the Initiative’s purposes. Just as importantly, it authorizes the establishment of “a network of interdisciplinary artificial intelligence research institutes” that are sector- and application-focused, devoted to a “cross-cutting challenge” for AI, and can translate research into ecosystems, applications, or products (among related objectives).
National AI Research Institutes were established through the National Science Foundation (NSF). The Federal AI R&D Interagency Working Group supports the work of the Institutes. This Group is overseen by two bodies: the NAIIO, which the OSTP launched in January 2021, and the Subcommittee on Machine Learning and Artificial Intelligence (established in 2016 under President Obama).
As Program Director James Donlon explains, the AI Institutes serve the nine strategic objectives in the US’s AI Research and Development Strategic Plan (first issued under the Obama administration in 2016 and updated in 2019). They are directed to pursue a “use-inspired research framework” and embrace “high-risk, high-reward projects.”
| Year | Major Initiatives | Administration | Status | Trump 2.0 Expectation |
|------|-------------------|----------------|--------|-----------------------|
| 2019 | EO on Maintaining American Leadership in AI | Trump | Active | Likely Updated |
| 2019 | National AI R&D Strategic Plan (2019 Update) | Trump | Inactive (Updated in 2023) | N/A (See 2023 Update) |
| 2020 | EO on Promoting the Use of Trustworthy AI in the Federal Government | Trump | Active; Partially Superseded by October 2023 EO | Likely Updated |
| 2020 | National Artificial Intelligence Initiative Act | Trump | Active | Likely Preserved and Mandates Expanded |
| 2021 | AI.gov | Biden | Active | Likely Preserved |
| 2022 | CHIPS and Science Act | Biden | Active | Likely Preserved; Funding Conditions Likely Modified |
| 2022 | Blueprint for an AI Bill of Rights | Biden | Non-Binding | Likely Rolled Back; Specific Tenets May Be Adapted |
| 2023 | National AI R&D Strategic Plan (2023 Update) | Biden | Active | Likely Reaffirmed and Updated |
| 2023 | EO on Safe, Secure, and Trustworthy Development and Use of AI | Biden | Active | Likely Repealed; Specific Tenets Likely Adapted |
Biden AI Policymaking
The Biden administration directly expanded upon several of the Trump-era initiatives. In May 2021, the OSTP launched AI.gov, a website designed to increase public awareness of the federal government’s work related to the National AI Initiative and its efforts to advance “the design, development, and responsible use of trustworthy artificial intelligence (AI).”
The Biden administration’s October 2022 NSS intensified the Trump-era focus on AI’s security impacts, noting that AI is among those “emerging technologies” that “transform warfare” as well as being among the “foundational technologies of the 21st century…” It notably emphasizes collaboration with “like-minded nations” in technological ‘co-development.’ While the rhetorical bent in Biden’s NSS regarding international collaboration slightly diverges from Trump’s comparatively greater focus on allied burden-sharing, the October 2022 NSS’s emphasis on technological ‘co-development’ finds significant continuity with Trump’s February 2019 EO.
More assertively, in October 2022 the OSTP published the Blueprint for an AI Bill of Rights. While not legally binding, the Blueprint laid out five principles of responsible AI development: (1) Safe and Effective Systems; (2) Algorithmic Discrimination Protections; (3) Data Privacy; (4) Notice and Explanation; (5) Human Alternatives, Consideration, and Fallback.
The priority given to principles like (2) is a likely disjuncture between the Biden and Trump administrations (past and future). Still, as Wong notes, Trump’s February 2019 EO did emphasize applications of AI that “protect civil liberties, privacy, and American values” (S1(d)).
Nevertheless, the Blueprint set the tone for the extensive October 2023 EO on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. In the interests of security and public trust, the EO directs federal agencies in the management of dual-use foundation models, the implementation of testing protocols for high-risk models (specified according to Floating-point Operations, or FLOP), protecting civil rights, and promoting transparency throughout the lifecycle of a model. One year after its signing, its milestones include the US AI Safety Institute (AISI), NIST’s Generative AI Risk Management Framework, and the first-ever National Security Memorandum on AI.
The October 2023 EO is chock-full of directives. Notably, it mandates that the Director of the Office of Management and Budget issue guidance on federal agencies’ designation of a Chief AI Officer, tasked in part with carrying out the December 2020 EO’s responsibilities under S8(c) (see S10.1(b)(i)). It also instructs the Director to direct agencies’ reporting on AI use-case risks “as appropriate” and to “update or replace the guidance originally established in section 5” of the same December 2020 EO (S10.1(e)), thereby partially updating Trump’s EO.
It also invokes the Defense Production Act (DPA) to mandate that companies test their dual-use foundation models of sufficient computational complexity via red-teaming and report their results to the Commerce Department (S4.2). Trump 2.0 is almost certain to axe this mandate, seeing it as overreach.
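The EO’s compute trigger can be made concrete with a back-of-the-envelope sketch. The sketch below uses the widely cited 6 × parameters × training-tokens heuristic for estimating training compute (a rough approximation from the scaling-laws literature, not the EO’s own methodology), alongside the EO’s 10^26-operation reporting threshold (S4.2(b)); the function names and example model sizes are illustrative assumptions:

```python
# Rough training-compute estimate via the common 6*N*D heuristic
# (FLOP ~ 6 x parameters x training tokens). Illustrative only; the
# heuristic is a scaling-laws rule of thumb, not the EO's methodology.

EO_THRESHOLD_FLOP = 1e26  # reporting threshold in the October 2023 EO (S4.2(b))

def estimated_training_flop(parameters: float, tokens: float) -> float:
    """Approximate total training FLOP for a dense model."""
    return 6 * parameters * tokens

def exceeds_eo_threshold(parameters: float, tokens: float) -> bool:
    """Would this (hypothetical) training run trip the EO's reporting trigger?"""
    return estimated_training_flop(parameters, tokens) > EO_THRESHOLD_FLOP

# A hypothetical 70B-parameter model trained on 15 trillion tokens:
flop = estimated_training_flop(70e9, 15e12)
print(f"{flop:.2e} FLOP")                  # ~6.3e24, under the 1e26 threshold
print(exceeds_eo_threshold(70e9, 15e12))   # False
```

The sketch also illustrates why critics call FLOP a “crude proxy”: the trigger depends only on raw compute, saying nothing about a model’s actual capabilities or the efficiency of its training recipe.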
The EO further calls for the launch of a pilot program implementing the National AI Research Resource (NAIRR) to make “computational, data, model, and training resources…available to the research community…” (S5.2(a)(i)). In January 2024, the NSF launched the NAIRR pilot, explaining that it aims to put forward a “proof of concept” that eventually leads to a fully funded program. The NAIRR pilot boasts both government and non-government partners.
Additionally, the Biden administration oversaw the expansion of the National AI Research Institutes. Beginning with seven in 2020 during then-President Trump’s first term, the network expanded in 2021 with the introduction of eleven new Institutes, followed by an additional seven in 2023. The October 2023 EO calls for “at least four new” Institutes within 540 days of its signing (S5.2(a)(iii)). Since then, two new Institutes were established in September 2024. A Supplement to the President’s FY 2024 Budget notes that, as of 2023, the total collaborative investment in these Institutes is $500 million.
Legislatively, the CHIPS and Science Act of 2022 aims to shore up American semiconductor manufacturing capacity, scientific R&D investment, and commercialization support. Due to later Congressional negotiations, however, some federal agencies’ basic research funding levels have fallen short of initial CHIPS expectations. Post-election, the outgoing Biden administration is hurrying to complete deals with firms that allow it to disburse allocated CHIPS funds, as Republican legislators mull removing environmental and social priorities from funding conditions in 2025. The law itself is unlikely to be repealed.
Expectations for Trump 2.0
What is striking about AI policymaking during the (first) Trump and Biden administrations is how many of the initiatives undertaken are either continuities or immune to obvious partisan sorting. The significance of AI to US national security, the National AI Initiative and its stewardship by the NAIIO, a promotion of trustworthy AI in public and private adoption, collaboration between federal agencies and non-federal entities in R&D, and the expansion of National AI Research Institutes are direct extensions of Trump-era policies and laws. Initiatives like the NAIRR pilot that are not formal continuations of previously established programs are not obvious disjunctures, either.
This does not indicate a lack of substantive disagreement between administrations. Alex Krasodomski suggests that the goal of American AI dominance was effectively baked into either a Harris or Trump administration—the difference would lie in the means to achieve it. Indeed, the scope and priority of collaboration with international partners and allies on AI R&D and other joint efforts, like evaluation standards-setting, were comparatively more emphasized during Biden’s term. The second Trump administration is likely to turn inward. This tracks a perception in Trump’s camp that the security implications of AI systems have risen in importance relative to their safety. Vice President Harris’ attendance at the UK’s AI Safety Summit signaled a commitment to an expansive notion of AI safety—and its prioritization—that may be narrowed or chucked aside in Trump’s second term. The AISI is therefore a possible casualty (though Congressional support may prevent this outcome).
More prominently, the October 2023 AI EO will be in a precarious state in the next Trump White House. Both Trump and the Republican National Committee 2024 platform vowed to repeal the EO, couching this commitment in free speech discourse. Trump promised to do so “on day one.”
A repeal of the EO has two immediate implications. First, as Divyansh Kaushik comments, agencies would go through a rulemaking process to undo final rules already implemented. Second, revoking the EO means severing the agencies’ information-gathering capacities enabled by it, thereby hindering future regulatory and legislative efforts. Critically, however, a repeal of the EO may leave some of its accomplishments intact given that they have been carried out by individual federal agencies.
The oddity is that the second Trump administration will find itself with key policies that are either continuations of the first or of the same spirit. What remains to be seen, then, is whether political deterioration will lead Trump to roll back his first administration’s AI policies beyond what this analysis would otherwise predict.
This is tantamount to asking: How willing was President Trump to delegate AI agenda-setting in his first term—thereby separating it from unrelated political controversies—and how sturdy would this wall of separation be in his second? Federal AI policymaking was likely substantially delegated during Trump’s first term, in both agenda-setting and implementation (consider, for example, that the NAIIO was established on January 12, 2021—six days after the January 6 insurrection). Should delegation of agenda-setting persist under Trump 2.0, leadership personnel selection for Commerce, NSF, OSTP, and the like will be critical (as of writing, nominees are unannounced).
Finally, Trump 2.0 could resurrect the use of ‘Schedule F’ to align the federal workforce (either wholesale or by individual agency) with perceptions of loyalty, hindering policy implementation. This is a significant risk for policymaking. It is not certain, as such a resurrection could be relatively targeted, but it could serve as the retrospective flashpoint on the robustness of Executive AI policymaking.
In perspective, it is therefore reasonable to expect the following under Trump 2.0: AI.gov will be preserved; the National AI R&D Strategic Plan will be reaffirmed and updated; the mandates of the National Artificial Intelligence Initiative Act will be expanded; the CHIPS Act will be preserved, though funding conditions will be modified by Congress; the Blueprint for an AI Bill of Rights’ emphasis on algorithmic discrimination will be eschewed; the October 2023 AI EO will be repealed, but specific tenets are likely to be adapted elsewhere; finally, the 2019 and 2020 AI EOs may be built upon and updated to reflect the new administration’s priorities.
AI Curveballs During Trump 2.0
AI policymaking from 2017 to 2024 tracked an industry-wide expectation of continued success for Machine (and specifically Deep) Learning models, particularly insofar as models are trained on ever-larger datasets requiring greater computing power and energy. That said, Trump 2.0 may face (at least) two curveballs that re-frame the policy agenda.
First, open-source, potentially hybrid efforts to achieve models capable of more robust abstraction and generalization are slowly coming to fruition. Policymakers should keep their eye on the ARC-AGI Prize, a competition that uniquely tests models’ abilities to acquire novel skills and apply them appropriately (rather than models that rely on expansive distributions of data that restrict their ‘reasoning’ abilities to human-generated reasoning examples). Even if ARC-AGI is not conquered, hosts François Chollet and Mike Knoop expect to release the leading approaches into the public domain in December 2024—mere weeks before President-elect Trump takes office.
Policymakers should not expect radical innovations but should take heed: these attempts may render some tenets of AI policymaking under the current Biden administration ineffective even if the next administration does not repeal them (e.g., using FLOP as a “crude proxy” for capabilities, evaluation standards, etc.). Both administrations have exhibited sensitivity to industry trends, despite differing on how closely industry actors and federal agencies should associate. This will persist under Trump 2.0. Research currently flying under the policymaking radar may thus end up re-framing the agenda.
Second, an underappreciated risk for Trump 2.0 is an AI industry downturn. The industry is fighting an uphill battle to produce profitable returns on Generative AI. Total capital spending among Microsoft, Meta, Amazon, and Google is forecast to reach $209 billion in 2024, driven up in part by spending on chips and data center infrastructure. A fragile dynamic is emerging wherein major tech firms’ insistence on the promise of increased short-term spending is balanced against investors’ anxieties. Talk of an AI bubble is commonplace. Opinions vary, to be sure, overlapping with personal stances on the technical trajectories of Machine Learning models. That said, a period of euphoria benefitting the AI industry during both the first Trump and Biden administrations may give way to chilly re-assessments in the next four years. Neither mere continuity nor political expediency would alone inoculate US AI policies against this outcome.