26 October 2024

NEW NATIONAL SECURITY MEMORANDUM: Biden Preps AI Warfare and Spying Plan

The Biden Administration’s National Security Memorandum on AI Explained

On October 24, 2024, the Biden administration released a National Security Memorandum (NSM) titled “Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence.” The memorandum was a stated requirement of the administration’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

As the lengthy title suggests, the document covers a diverse set of issues. At nearly 40 pages, it is by far the most comprehensive articulation yet of United States national security strategy and policy toward artificial intelligence (AI). A closely related companion document, the Framework to Advance AI Governance and Risk Management in National Security, was published on the same day.

Q1: What aspects of AI is the NSM focused on?

A1: For most of the last decade, the AI and national security policy community has been focused on deep learning AI technology, which has been booming since 2012. Deep learning has powered critical AI-enabled applications such as face recognition, voice recognition, autonomous systems, and recommendation engines, each of which has resulted in significant military and intelligence applications. The 2024 AI NSM, unlike the AI executive order, mostly ignores the AI technologies developed and deployed in the 2012–2022 timeframe. Instead, the NSM is squarely concerned with frontier AI models, which exploded in importance after the release of ChatGPT by OpenAI in 2022.

While all frontier AI models—such as the ones that power OpenAI’s ChatGPT, Anthropic’s Claude, or Google’s Gemini—continue to be based on deep learning approaches, frontier models are different from earlier deep learning models in that they are highly capable across a much more diverse set of applications. The previous generation of AI systems, especially those based on supervised deep learning, tended to be much more application-specific.

The AI NSM officially defines a frontier AI model as “a general-purpose AI system near the cutting-edge of performance, as measured by widely accepted publicly available benchmarks, or similar assessments of reasoning, science, and overall capabilities.” This is consistent with other recent Biden administration moves. Elizabeth Kelly, director of the U.S. AI Safety Institute (AISI), said in an interview at CSIS that AISI is specifically focused on advanced AI and especially frontier models.

Section 1 of the NSM articulates why the Biden administration views frontier AI technology as such a pressing national security priority:

“Recent innovations have spurred not only an increase in AI use throughout society, but also a paradigm shift within the AI field . . . This trend is most evident with the rise of large language models, but it extends to a broader class of increasingly general-purpose and computationally intensive systems. The United States Government must urgently consider how this current AI paradigm specifically could transform the national security mission.”

Q2: What is the historical precedent for a document like this?

A2: In his October 24 speech announcing the AI NSM, Jake Sullivan, the assistant to the president for national security affairs (APNSA), explicitly compared the current AI revolution to earlier transformative national security technologies such as nuclear and space. U.S. government officials told CSIS that some of the critical early U.S. national security strategy documents for those technologies served as a direct inspiration for the creation of the AI NSM. For example, NSC-68, published in 1950 at a critical moment in the early Cold War, recommended a massive buildup of nuclear and conventional arms in response to the Soviet Union’s nuclear program. This analogy is imperfect since the AI NSM is not advocating a massive arms buildup, but the comparison does helpfully illustrate that the Biden administration views the AI NSM as a landmark document articulating a comprehensive strategy towards a transformative technology.

Q3: Who is the intended audience for this document?

A3: In June 2024, Maher Bitar, deputy assistant to the president and a leader on the White House National Security Council staff, noted that the AI NSM would be “speaking to many audiences at once, but there will be a portion that will remain classified as well.”

Among the Biden administration’s “many audiences,” there are four key ones that it likely had in mind:

  • Audience 1—U.S. federal agencies and their staff: The AI NSM sets out U.S. national security policy toward frontier AI and provides specific taskings for many different federal agencies in executing that policy. Providing policy clarity and taskings for federal agencies was the primary stated objective for the AI NSM, as laid out in the October 2023 AI executive order.
     
  • Audience 2—U.S. AI companies: As APNSA Sullivan said, “Private companies are leading the development of AI, not the government.” For private industry, the AI NSM clarifies what the Biden administration sees as the proper roles of the public and private sectors in advancing U.S. national security interests, including what the U.S. government must do to support private sector AI leadership and what the government wants for and wants from the private sector in the name of national security.
     
  • Audience 3—U.S. allies: This is far from the first major policy move that the Biden administration has made at the intersection of AI and national security. For example, in October 2022, the Biden administration unveiled comprehensive new controls on exports to China of the semiconductor technologies that enable advanced AI systems. At the time, the written justification for the export control policy focused on the use of advanced AI chips by China’s military, especially related to weapons of mass destruction. However, as a standalone measure, this justification struck some allies as incomplete for such a dramatic reversal of 25 years of U.S. trade and technology policy toward China. With the AI NSM, the larger and mostly unstated (at least publicly) justification for the policy is now clear: the critical strategic importance of frontier AI systems. U.S. allies now have a canonical reference point for understanding why the United States sees leadership in frontier AI systems as critical and why the United States was willing to take extraordinary measures to preserve AI leadership.

    Additionally, U.S. allies and partners are trying to understand what role they can play in the U.S.-led AI ecosystem. For example, the United States and the United Arab Emirates (UAE) struck a deal related to building significant AI-related data centers and energy infrastructure in that country. The need for a government-to-government deal was motivated by a private-sector agreement between Microsoft and G42. Both deals have attracted controversy among some U.S. national security leaders who questioned why the U.S. would encourage a strategic technology to spread abroad rather than remain in the United States. The NSM does not comment on the UAE deal specifically, but it does seek to reassure U.S. allies and partners that they stand to benefit from the U.S. strategy. Specifically, the NSM states, “The United States’ network of allies and partners confers significant advantages over competitors. Consistent with the 2022 National Security Strategy or any successor strategies, the United States Government must invest in and proactively enable the co-development and co-deployment of AI capabilities with select allies and partners.” 
     
  • Audience 4—U.S. adversaries and strategic competitors: Analysts in China and Russia will undoubtedly study the NSM closely. While China is never mentioned by name, it is by far the United States’ most formidable competitor for global AI leadership. Most of the document details how the U.S. intends to outcompete China, but some provisions might be seen as complementing previous and potential future diplomatic overtures. For example, the Framework to Advance AI Governance and Risk Management in National Security, which is a companion document to the AI NSM and frequently referenced in it, includes a prohibition on using AI to “Remove a human in the loop for actions critical to informing and executing decisions by the President to initiate or terminate nuclear weapons employment.” This is a topic that was likely raised at the May 2024 U.S.-China AI Safety meeting in Geneva.

Q4: What are the primary objectives of the NSM, and how does it pursue them?

A4: The NSM lays out three high-level policy objectives for the U.S. national security community. This section will summarize each objective and highlight some (far from all) of the key agency taskings that are tied to these objectives.

Objective 1—Maintain U.S. leadership in the development of advanced AI systems: The AI NSM outlines a series of measures designed to ensure the United States retains its position as the global leader of the AI ecosystem.

  • AI Talent and Immigration: The AI NSM sees a larger and superior pool of AI talent as a critical U.S. strategic advantage. It focuses especially on taking actions to preserve and expand the United States’ strength in attracting leading AI talent from around the world. 

    The document states, “It is the policy of the United States Government that advancing the lawful ability of noncitizens highly skilled in AI and related fields to enter and work in the United States constitutes a national security priority.” To support this effort, the AI NSM directs federal agencies to “use all available legal authorities to assist in attracting and rapidly bringing to the United States individuals with relevant technical expertise who would improve United States competitiveness in AI and related fields, such as semiconductor design and production.”

    Additionally, the AI NSM directs the White House Council of Economic Advisers to conduct a study on the state of AI talent in the U.S. and abroad and a separate study on the “relative competitive advantage of the United States private sector AI ecosystem.” 
     
  • Energy and Infrastructure: APNSA Sullivan said, “One thing is for certain: If we don’t rapidly build out this [energy and data center] infrastructure in the next few years, adding tens or even hundreds of gigawatts of clean power to the grid, we will risk falling behind.” Total U.S. electrical generation capacity today is only about 1,250 gigawatts. Thus, if Sullivan’s more bullish “hundreds of gigawatts” scenario occurs, AI might represent as much as 25 percent of total U.S. electricity consumption (see the rough calculation following this list). Such a massive expansion of infrastructure in such a short period of time has not occurred in the United States in many decades.

    Without new budgetary legislation from Congress, the executive branch cannot do much to increase funding available for a massive AI infrastructure buildout. Unsurprisingly, the document is therefore focused on the parts of the issue that are nearer to executive branch competencies. 

    This includes identifying the barriers to rapid construction and attempting to mitigate or eliminate them. The document directs the White House Chief of Staff, the Department of Energy (DOE), and other relevant agencies to “coordinate efforts to streamline permitting, approvals, and incentives for the construction of AI-enabling infrastructure, as well as surrounding assets supporting the resilient operation of this infrastructure, such as clean energy generation, power transmission lines, and high-capacity fiber data links.” 

    The NSM’s use of the word “coordinate” is a tacit acknowledgment that much of the critical authority for budgeting and for reforming regulations lies outside the executive branch, with Congress and with state and local governments. In this area, the White House can seek to lead and persuade, but it has limited ability to command. However, the White House clearly views this as a top political priority, as evidenced by the fact that the effort is tasked to the White House Chief of Staff, not merely the Department of Energy.

     
  • Counterintelligence: White House leaders know that it would make little sense for the United States to spend tens or hundreds of billions of dollars developing frontier AI models if China can steal them in a trivially expensive espionage campaign. The AI NSM thus extends U.S. counterintelligence activities to the key players in the U.S. AI industry. APNSA Sullivan stated that the inclusion of AI infrastructure and intellectual property among official counterintelligence priorities would mean “more resources and more personnel” devoted to countering adversaries’ theft, espionage, and disruption.

    In the recent past, some leading U.S. AI companies have been the victims of devastating cyber-attacks, and not just from nation-states. For example, WIRED reported in 2022 that a group of cyber criminals breached Nvidia and stole “a significant amount of sensitive information about the designs of Nvidia graphics cards, source code for an Nvidia AI rendering system called DLSS, and the usernames and passwords of more than 71,000 Nvidia employees.” Nvidia claims to have since significantly upgraded its cybersecurity, though it is far from clear that the U.S. AI industry as a whole is prepared. The AI NSM will get the U.S. national security community involved in protecting commercial AI companies and securing their sensitive technology.
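To make the scale concrete, here is a rough, back-of-the-envelope version of the electricity arithmetic referenced above. It assumes, purely for illustration, that “hundreds of gigawatts” means roughly 300 gigawatts of new AI-dedicated capacity; Sullivan’s speech did not specify a figure.

\[
\frac{300~\text{GW (assumed AI buildout)}}{1{,}250~\text{GW (current U.S. generation capacity)}} = 0.24 \approx 25\%
\]

Note that this compares new generating capacity against today’s installed capacity; AI’s actual share of electricity consumption would also depend on how heavily the new and existing capacity is utilized.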

Objective 2—Accelerate adoption of frontier AI systems across U.S. national security agencies: APNSA Sullivan opened his remarks by expressing confidence about the current state of U.S. leadership in artificial intelligence technology. However, he also expressed concern that this leadership was not being effectively harnessed for national security advantage, stating, “We could have the best team but lose because we didn’t put it on the field.” Accordingly, the AI NSM mandates that national security agencies “act decisively to enable the effective and responsible use of AI in furtherance of its national security mission.” Some of the key actions include:

  • Directing agencies to reform their human capital and hiring practices to better attract and retain AI talent;
  • Directing agencies to reform their acquisition and contracting practices to make it easier for private sector AI companies to contribute to the national security mission;
  • Directing the Department of Defense (DOD) and the Intelligence Community (IC) to examine how existing policies and procedures related to existing legal obligations (e.g., privacy and civil liberties) can be revised to “enable the effective and responsible use of AI”; and
  • Directing federal agencies to examine how existing policies and procedures related to cybersecurity can likewise be revised to accelerate adoption (without exacerbating cybersecurity risk).

The White House deserves credit for correctly identifying these critical areas and tasking the agencies with tackling them. It should be noted, however, that each of these areas has long been identified as a key barrier to AI adoption, and agencies have struggled in the past to meaningfully reform. With or without an AI NSM, this is not an easy undertaking.

Objective 3—Develop robust governance frameworks to support U.S. national security: While the worldwide AI community uses the word “governance” to mean many different things, sometimes including many of the ideas covered by AI safety, the AI NSM addresses governance primarily in terms of who has authority to make decisions regarding the use of AI and what processes they use to make such decisions. To this end, the NSM tasks agencies with a wide range of governance actions.

Two especially noteworthy efforts include requiring nearly all national security agencies to designate a chief AI officer and directing the creation of an AI National Security Coordination Group consisting of the chief AI officers of the Department of State, DOD, Department of Justice, DOE, Department of Homeland Security, Office of Management and Budget, Office of the Director of National Intelligence, Central Intelligence Agency, Defense Intelligence Agency, National Security Agency, and National Geospatial-Intelligence Agency.

The AI NSM also commits the United States to work with international partners and institutions—such as the G7, Organisation for Economic Co-operation and Development, and the United Nations—to advance international AI governance.

Many portions of the NSM related to Objective 3 directly refer to the companion Framework to Advance AI Governance and Risk Management in National Security (which this article will henceforth refer to as the National Security AI Governance Framework). The Biden administration sees these documents as two parts of a whole when it comes to the U.S. national security strategy for AI. The governance framework will be addressed in more detail in Q6.

Q5: What does the AI NSM mean for the future of AI Safety and Security?

A5: The NSM has a significant focus on AI safety and security initiatives. At first glance, this may seem at odds with the NSM’s previously stated goal of accelerating the adoption and use of AI systems. However, APNSA Sullivan said the following about this apparent contradiction:

“Ensuring security and trustworthiness will actually enable us to move faster, not slow us down. Put simply, uncertainty breeds caution. When we lack confidence about safety and reliability, we’re slower to experiment, to adopt, to use new capabilities—and we just can’t afford to do that in today’s strategic landscape.”

In other words, the absence of clear and comprehensive safety approaches impedes the national security community’s ability to rapidly adopt frontier AI tools. There is an additional mechanism, which APNSA Sullivan did not mention, by which codifying AI safety, security, and governance procedures can help government agencies accelerate AI adoption: clear guidance on prohibited AI use cases and on procedures for getting approval of high-risk AI applications will help allay government staff’s concerns that their AI adoption efforts might break regulations of which they are not even aware. In massive government bureaucracies, this kind of career risk aversion is a common phenomenon.

The AI NSM’s safety-related sections mostly focus on the work of the U.S. AISI and thus provide the clearest articulation to date of the relationship between AI safety and U.S. national security interests. One senior Biden administration official went so far as to say:

“The NSM serves as a formal charter for the AI Safety Institute in the Department of Commerce, which we have created to be the primary port of call for U.S. AI developers. They have already issued guidance on safe, secure, and trustworthy AI development and have secured voluntary agreements with companies to test new AI systems before they are released to the public.”

In terms of specific taskings, the AI NSM does the following:

  • Designates the U.S. AISI as the “primary point of contact” in government for private sector AI companies when it comes to AI testing and evaluation activities.
  • Directs the AISI to, within 180 days, “pursue voluntary preliminary testing of at least two frontier AI models prior to their public deployment or release to evaluate capabilities that might pose a threat to national security.”
  • Directs the AISI to, within 180 days, “issue guidance for AI developers on how to test, evaluate, and manage risks to safety, security, and trustworthiness arising from dual-use foundation models.”
  • Directs the AISI to begin robust collaboration with the AI Security Center at the National Security Agency to “develop the capability to perform rapid systematic classified testing of AI models’ capacity to detect, generate, and/or exacerbate offensive cyber threats.”

Q6: What are the primary objectives of the National Security AI Governance Framework?

A6: The National Security AI Governance Framework outlines four key pillars federal agencies must use as a starting point for their AI governance decisions. The document also includes further taskings to agencies not included in the NSM. The stated goal of this framework is to “support and enable the U.S. Government to continue taking active steps to uphold human rights, civil rights, civil liberties, privacy, and safety; ensure that AI is used in a manner consistent with the President’s authority as commander-in-chief to decide when to order military operations in the nation’s defense; and ensure that military use of AI capabilities is accountable, including through such use during military operations within a responsible human chain of command and control.”

Regarding the last item, one government official told CSIS that the framework solidified the U.S. policy that countries are responsible for and commanders are accountable for the activities of their military and intelligence organizations—whether or not AI is playing an important role in those activities.

Of special note, whereas the AI NSM is focused on frontier AI systems, the National Security AI Governance Framework applies to all AI systems, which in the U.S. government context usually means systems that are based upon machine learning technology.

The framework’s four pillars are listed below, along with a non-exhaustive description of key provisions:

  • AI Use Restrictions: This section outlines applications of AI systems that are prohibited, in addition to defining which AI use cases qualify as “high-impact.” With respect to autonomous and semiautonomous weapons systems, the document defers to Department of Defense Directive 3000.09.
  • Minimum Risk Management Practices for High-Impact and Federal Personnel-Impacting AI Uses: This section lays out the minimum baseline safeguards agencies should put in place for high-impact AI uses. It includes a comprehensive list of risk management practices agencies must adhere to within 180 days of the framework’s release.
  • Cataloguing and Monitoring AI Use: This section sets the inventory, data management, and oversight requirements federal agencies must follow. It includes a lengthy set of Chief AI Officer skillsets and responsibilities. The AI NSM directed that “each covered agency shall have a Chief AI Officer who holds primary responsibility within that agency.”
  • Training and Accountability: This section tasks agencies with creating “standardized training requirements and guidelines” for officials who must interact with AI systems, in addition to updating their whistleblower protection policies for personnel who use AI systems in national security contexts.

One reason for separating the National Security AI Governance Framework from the AI NSM is that the two documents have separate processes for being updated, with the AI Governance Framework being easier to amend.

Q7: Do the process requirements of the National Security AI Governance Framework apply universally?

A7: No. Beyond its restricted focus on “prohibited” and “high-impact” AI use cases, the framework also creates a new waiver process whereby the chief AI officer of any federal agency can authorize bypassing risk management practices when those practices “would create an unacceptable impediment to critical agency operations or exceptionally grave damage to national security,” among other conditions.

The framework includes measures to ensure that these waivers are not used excessively and without good reason. For example, chief AI officers cannot delegate waiver authority, must review all issued waivers annually, and must “report to agency leadership immediately, and to the Department Head and APNSA within three days, upon granting or revoking any waiver.”

Despite these limitations on waiver usage, the existence of this waiver process in and of itself represents a significant elevation in power and authority for every chief AI officer throughout the U.S. national security community.

Q8: What might the upcoming U.S. presidential election mean for the implementation of the AI NSM?

A8: The NSM is a landmark national security strategy document and outlines an ambitious and comprehensive vision for AI’s role in national security. However, the degree to which it is implemented will be affected by the election simply because—regardless of the electoral outcome—a new president will be in office when many of the tasking deadlines occur.

While there is strong reason to believe that a Kamala Harris administration would continue most of the Biden administration’s major AI policy efforts, a second Donald Trump administration could represent a dramatic departure. For example, the Republican Party policy platform explicitly endorses repealing the Biden administration’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which mandated the creation of the NSM.

However, there are provisions of the AI NSM that are consistent with policy positions taken by former president Trump. In September 2024, for example, Trump said that he wanted to “quickly double our electric capacity, which will be needed to compete with China and other countries on artificial intelligence.” A second Trump administration may therefore seek to cancel only specific provisions of the AI NSM rather than the document as a whole.

Gregory C. Allen is the director of the Wadhwani AI Center at the Center for Strategic and International Studies in Washington, D.C. Isaac Goldston is a research associate with the Wadhwani AI Center.

The authors would like to thank Samantha Gonzalez for her research support.