23 October 2024

TechCrunch Report: The US AI Safety Institute stands on shaky ground

One of the few U.S. government offices dedicated to assessing AI safety is in danger of being dismantled if Congress doesn’t choose to authorize it.

Kyle Wiggers
8:13 AM PDT · October 22, 2024 

The U.S. AI Safety Institute (AISI), a federal government body that studies risks in AI systems, was created in November 2023 as a part of President Joe Biden’s AI Executive Order. The AISI operates within NIST, an agency of the Commerce Department that develops guidance for the deployment of various categories of technologies.

But while the AISI has a budget, a director, and a research partnership with its U.K. counterpart, the U.K. AI Safety Institute, it could be wound down with a simple repeal of Biden’s executive order.

“If another president were to come into office and repeal the AI Executive Order, they would dismantle the AISI,” Chris MacKenzie, senior director of communications at Americans for Responsible Innovation, an AI lobby group, told TechCrunch. 
“And [Donald] Trump has promised to repeal the AI Executive Order. So Congress formally authorizing the AI Safety Institute would ensure its continued existence regardless of who’s in the White House.”


In a letter today, a coalition of over 60 companies, nonprofits, and universities called on Congress to enact legislation codifying the AISI before the end of the year.

  • Among the signatories are OpenAI and Anthropic, both of which have signed agreements with the AISI to collaborate on AI research, testing, and evaluation.

The Senate and House have each advanced bipartisan bills to authorize the activities of the AISI. But the proposals have faced some opposition from conservative lawmakers, including Sen. Ted Cruz (R-Texas), who’s called for the Senate version of the AISI bill to pull back on diversity programs.


  • Granted, the AISI is a relatively weak organization from an enforcement perspective. Its standards are voluntary. 
  • But think tanks and industry coalitions — as well as tech giants like Microsoft, Google, Amazon, and IBM, all of which signed the aforementioned letter — see the AISI as the most promising avenue to AI benchmarks that can form the basis of future policy.

There’s also concern among some interest groups that allowing the AISI to fold would risk ceding AI leadership to foreign nations. 
During an AI summit in Seoul in May 2024, international leaders agreed to form a network of AI Safety Institutes comprising agencies from Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada, and the European Union, in addition to the U.K. and U.S.

LINK >> TechCrunch
