
Reps Lieu, Nunn, Beyer, Molinaro Introduce Bipartisan Bill To Establish AI Guidelines For Federal Agencies And Vendors

Today, Congressmembers Ted W. Lieu (D-Los Angeles County), Zach Nunn (R-IA), Don Beyer (D-VA), and Marcus Molinaro (R-NY) introduced the Federal Artificial Intelligence Risk Management Act, a bipartisan and bicameral bill to require U.S. federal agencies and vendors to follow the AI risk management guidelines put forth by the National Institute of Standards and Technology (NIST). Senators Jerry Moran (R-KS) and Mark R. Warner (D-VA) introduced companion legislation in the Senate late last year. 

Congress directed NIST to develop an AI Risk Management Framework that organizations, public and private, could employ to ensure they use AI systems in a trustworthy manner. This framework was released early last year and is supported by a wide range of public and private sector organizations, but federal agencies are not currently required to use it to manage their use of AI systems.

The Federal Artificial Intelligence Risk Management Act would require federal agencies and vendors to incorporate the NIST framework into their AI management efforts to help limit the risks that could be associated with AI technology.

“As AI continues to develop rapidly, we need a coordinated government response to ensure the technology is used responsibly and that individuals are protected,” said Rep. Lieu. “The AI Risk Management Framework developed by NIST is a great starting point for agencies and vendors to analyze the risks associated with AI and to mitigate those risks. These guidelines have already been used by a number of public and private sector organizations, and there is no reason why they shouldn’t be applied to the federal government as well. I’m grateful to my House and Senate colleagues from both sides of the aisle for their partnership in this effort to promote safe AI use within the federal government and to allow the United States to continue to lead on AI.” 

“As the federal government expands its use of innovative AI technology, it becomes increasingly important to safeguard against AI’s potential risks. Our bill, which would require the federal government to put into practice the excellent risk mitigation and AI safety frameworks developed by NIST, is a natural starting point,” said Rep. Beyer. “By ensuring that federal agencies have the necessary tools to navigate the complexities of AI, we can ensure both the trustworthiness and effectiveness of AI systems used by the government and encourage other organizations and companies to adopt similar standards. This bill lays the foundation for harnessing the power of AI for the benefit of the American people, while upholding the highest standards of accountability and transparency.”

“Technological advancement is good for society, and if done right, can make the government more effective,” said Rep. Nunn. “As the federal government implements AI towards this end, we must ensure that Americans’ data is safe and the government is transparent about what it is doing. This bipartisan bill will ensure we’re doing everything we can to protect the American people while leveraging the full capabilities of new technology.”

“While we’re only beginning to realize the full power of AI – we have to recognize it is here and being widely utilized,” said Rep. Molinaro. “Congress must provide guidance to operate AI safely and close cybersecurity gaps. Congress took an important step forward by directing NIST to develop an AI Risk Management Framework. Our bipartisan bill will require federal agencies to adopt this framework to help unleash the potential of AI, while keeping federal assets secure.” 

“AI has tremendous potential to improve the efficiency and effectiveness of the federal government, in addition to the potential positive impacts on the private sector,” said Sen. Moran. “However, it would be naïve to ignore the risks that accompany this emerging technology, including risks related to data privacy and challenges verifying AI-generated data. The sensible guidelines established by NIST are already being utilized in the private sector and should be applied to federal agencies to make certain we are protecting the American people as we apply this technology to government functions.”

“The rapid development of AI has shown that it is an incredible tool that can boost innovation across industries,” said Sen. Warner. “But we have also seen the importance of establishing strong governance, including ensuring that any AI deployed is fit for purpose, subject to extensive testing and evaluation, and monitored across its lifecycle to ensure that it is operating properly. It’s crucial that the federal government follow the reasonable guidelines already outlined by NIST when dealing with AI in order to capitalize on the benefits while mitigating risks.”

“As a long-standing champion and early adopter of the NIST AI Risk Management Framework, Workday welcomes today’s introduction of the Federal AI Risk Management Framework Act,” said Chandler C. Morse, Vice President of Public Policy, Workday. “This bipartisan proposal would advance responsible AI by directing both federal agencies and companies selling AI in the federal marketplace to adopt the NIST Framework. Leveraging the buying power of the federal government will also send an important message to the private sector and go a long way towards building trust in AI. We congratulate Representatives Lieu, Nunn, Beyer, and Molinaro for their leadership and encourage Congress to act in support of the bill’s adoption.”

“Implementing a widely recognized risk management framework by the U.S. Government can harness the power of AI and advance this technology safely,” said Fred Humphries, Corporate Vice President, U.S. Government Affairs, Microsoft. “We look forward to working with Representatives Lieu, Nunn, Beyer, and Molinaro as they advance this framework.”

“Okta is a strong proponent of interoperability across technical standards and governance models alike and as such we applaud Representatives Lieu, Nunn, Beyer, and Molinaro for their bipartisan Federal AI Risk Management Framework Act,” said Michael Clauser, Director, Head of US Federal Affairs, Okta. “This bill complements the Administration’s recent Executive Order on Artificial Intelligence (AI) and takes the next steps by providing the legislative authority to require federal software vendors and government agencies alike to develop and deploy AI in accordance with the NIST AI Risk Management Framework (RMF). The RMF is a quality model for what public-private partnerships can produce and a useful tool as AI developers and deployers govern, map, measure, manage, and mitigate risk from low- and high-impact AI models alike.”

“IEEE-USA heartily supports the Federal Artificial Intelligence Risk Management Act of 2024,” said Russell Harrison, Managing Director, IEEE-USA. “Making the NIST Risk Management Framework (RMF) mandatory helps protect the public from unintended risks of AI systems yet permits AI technology to mature in ways that benefit the public. Requiring agencies to use standards, like those developed by IEEE, will protect both public welfare and innovation by providing a useful checklist for agencies implementing AI systems. Required compliance does not interfere with competitiveness; it promotes clarity by setting forth a ‘how-to.’”

“Procurement of AI systems is challenging because AI evaluation is a complex topic and expertise is often lacking in government,” said Dr. Arvind Narayanan, Professor of Computer Science, Princeton University. “It is also high-stakes because AI is used for making consequential decisions. The Federal Artificial Intelligence Risk Management Act tackles this important problem with a timely and comprehensive approach to revamping procurement by shoring up expertise, evaluation capabilities, and risk management.”

“Risk management in AI requires making responsible choices with appropriate stakeholder involvement at every stage in the technology’s development; by requiring federal agencies to follow the guidance of the NIST AI Risk Management Framework to that end, the Federal AI Risk Management Act will contribute to making the technology more inclusive and safer overall,” said Yacine Jernite, Machine Learning & Society Lead, Hugging Face. “Beyond its direct impact on the use of AI technology by the Federal Government, this will also have far-reaching consequences by fostering more shared knowledge and development of necessary tools and good practices. We support the Act and look forward to the further opportunities it will bring to build AI technology more responsibly and collaboratively.”

“The Enterprise Cloud Coalition supports the Federal AI Risk Management Act of 2024, which mandates agencies adopt the NIST AI Risk Management Framework to guide the procurement of AI solutions,” said Andrew Howell, Executive Director, Enterprise Cloud Coalition. “By standardizing risk management practices, this act ensures a higher degree of reliability and security in AI technologies used within our government, aligning with our coalition's commitment to trust in technology. We believe this legislation is a critical step toward advancing the United States' leadership in the responsible use and development of artificial intelligence on the global stage.”

A one-page explanation of the legislation can be found here.

The full text of the bill can be found here.