
The US AI Safety Institute is in a precarious position

A key U.S. government office focused on AI safety could be shut down unless Congress steps in to authorize it.

The U.S. AI Safety Institute (AISI), established in November 2023 under President Joe Biden’s AI Executive Order, is tasked with evaluating risks posed by AI systems. It operates within the National Institute of Standards and Technology (NIST), a Commerce Department agency responsible for developing technology standards and guidelines.

Although the AISI has a budget, leadership, and a research collaboration with the U.K.’s AI Safety Institute, it could be dismantled if Biden’s executive order is repealed.

"If a future president rescinds the AI Executive Order, the AISI would be dissolved," said Chris MacKenzie, communications director at Americans for Responsible Innovation, an AI advocacy group. MacKenzie added that former President Trump has vowed to repeal the order. Congressional authorization would safeguard the AISI’s existence, ensuring it remains in place regardless of presidential changes.

Beyond securing its future, formal authorization could also lead to more consistent funding from Congress. Currently, the AISI operates with a $10 million budget, a modest sum given Silicon Valley’s vast AI investments.

MacKenzie explained that Congress tends to prioritize funding for entities that are formally authorized, as they are viewed as long-term investments rather than temporary initiatives.

Today, over 60 organizations, including companies, nonprofits, and universities, urged Congress to pass legislation that would officially establish the AISI. Supporters include OpenAI and Anthropic, both of which collaborate with the AISI on AI research and evaluation.

Both the Senate and the House have introduced bipartisan bills to formalize the AISI’s role, though opposition has emerged from some conservative lawmakers, like Sen. Ted Cruz (R-Texas), who has raised concerns over diversity programs in the legislation.

While the AISI's standards are voluntary and lack enforcement power, tech leaders such as Microsoft, Google, Amazon, and IBM see the organization as a critical pathway to creating AI benchmarks that could influence future regulations.

There’s also concern that closing the AISI would cede AI leadership to other countries. At a May 2024 AI summit in Seoul, global leaders agreed to establish a network of AI Safety Institutes across several jurisdictions, including Japan, Germany, South Korea, and the European Union.

Jason Oxman, CEO of the Information Technology Industry Council, emphasized that Congress has an opportunity to prevent the U.S. from losing ground in AI by permanently authorizing the AISI. He urged lawmakers to act before the end of the year to secure the institute’s critical role in fostering U.S. AI leadership and innovation.
