The US AI Safety Institute stands on shaky ground
Kyle Wiggers · 8:13 AM PDT · October 22, 2024
One of the few U.S. government offices dedicated to assessing AI safety is in danger of being dismantled unless Congress chooses to authorize it.
The U.S. AI Safety Institute (AISI), a federal government body that studies risks in AI systems, was created in November 2023 as part of President Joe Biden’s AI executive order. The AISI operates within the National Institute of Standards and Technology (NIST), an agency of the Commerce Department that develops guidance for the deployment of various categories of technologies.
But while the AISI has a budget, a director, and a research partnership with its British counterpart, the U.K. AI Safety Institute, it could be wound down with a simple repeal of Biden’s executive order.
In a letter today, a coalition of over 60 companies, nonprofits, and universities called on Congress to enact legislation codifying the AISI before the end of the year.
- Among the signatories are OpenAI and Anthropic, both of which have signed agreements with the AISI to collaborate on AI research, testing, and evaluation.
The Senate and House have each advanced bipartisan bills to authorize the activities of the AISI. But the proposals have faced some opposition from conservative lawmakers, including Sen. Ted Cruz (R-Texas), who’s called for the Senate version of the AISI bill to pull back on diversity programs.
- Granted, the AISI is a relatively weak organization from an enforcement perspective. Its standards are voluntary.
- But think tanks and industry coalitions — as well as tech giants like Microsoft, Google, Amazon, and IBM, all of which signed the aforementioned letter — see the AISI as the most promising avenue to AI benchmarks that can form the basis of future policy.
LINK >> TechCrunch