Highlights From the Biden Administration Executive Order on AI


On October 30, 2023, President Biden signed the 53-page Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which significantly advances the United States' policy framework for artificial intelligence (AI). The directive builds on the Biden Administration’s earlier Blueprint for an AI Bill of Rights and outlines a comprehensive strategy aimed at positioning the United States as a leader in guiding responsible AI development and application.

While the Executive Order recognizes the potential of responsible AI systems to make the world more prosperous, productive, innovative, and secure, it also acknowledges that irresponsible use could worsen societal problems such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.

The Executive Order also recognizes that addressing these issues requires coordination among the government, private sector, academia, and civil society. While most of the requirements in the Executive Order apply only to the federal government, private businesses may be affected by the handful of requirements that apply directly to private enterprises, by requirements that may apply to businesses that contract with the federal government for the use of AI, and by any statutes, regulations, or guidance that must be developed under the Executive Order.

Guiding Principles

The Executive Order sets out eight guiding principles and priorities for the responsible development and use of AI:

Regulatory Requirements

The Executive Order sets forth a number of specific requirements designed to realize these principles. While much of the Executive Order creates obligations for cabinet members and agency heads, the results of many of these activities are likely to affect private businesses. Specifically, the Executive Order requires cabinet members and agency heads, generally in cooperation with private industry, to enact policies and procedures and take other actions that may impact private businesses in the following ways:

Key Takeaways

The definition and impact are seemingly broad. The Executive Order defines an “AI system” to include any data system, software, hardware, application, tool, or utility that operates in whole or in part with AI. The definition does not limit which specific products, software, or geographic locations fall within an “AI system,” leaving open to interpretation how far the new rules and regulations will reach.

This is the start of potentially rigorous regulation. Many agencies have been tasked with developing national standards to ensure the safety and security of AI. It is possible that some agencies will develop standards or regulations that mirror, or even conflict with, one another. However, until the agencies begin promulgating regulations, it is unclear how this will affect AI in the long term.

The administration is not shy about its desire to promote competition. The Executive Order not only calls for small developers and entrepreneurs to be given access to technical assistance and resources to help commercialize AI breakthroughs, but also calls for streamlining the process for noncitizens to conduct research in AI and other critical and emerging technologies. Further, a pilot of the National AI Research Resource will be launched as a tool to give AI researchers and students access to key AI resources and data.

Concern for safety and security is at the forefront. The Executive Order focuses on protecting individuals’ rights and safety, with the aim of mitigating the risk of discrimination and bias from AI systems. Specifically, it includes directions to advance equity and civil rights, reduce the potential for misleading the public, and support workers facing workplace surveillance, bias, and job displacement.

Looking Ahead

Much of the Executive Order directs cabinet members and agency heads to develop guidance or regulations on the use of AI that have yet to be issued. It also imposes deadlines on federal agencies to issue reports and draft guidelines addressing key concerns raised by the administration. If Congress enacts further legislation in response to the Executive Order, businesses will need to stay attentive to these new rules and take the steps necessary to comply.

Companies should ensure they have assistance from counsel to help guide them through this increasingly complex legal landscape. Continuing to monitor for such guidance or regulations, and preparing to implement them as appropriate once they are released, will go a long way toward maximizing the benefits of AI while minimizing disruption to operations. In the meantime, organizations may wish to begin self-audits and monitor AI systems currently under development or already deployed for potential data privacy and cybersecurity risks, erroneous outputs, and bias.

Mackenzie N. Barrett and Robin R. Zhang contributed to this article.
 


© 2024 Foley & Lardner LLP
National Law Review, Volume XIII, Number 320