US President Joe Biden's plan for containing the dangers of artificial intelligence already risks being derailed by congressional bean counters.
A White House executive order on AI announced in October 2023 calls on the US to develop new standards for stress-testing AI systems to uncover their biases, hidden threats, and rogue tendencies.
But the agency tasked with setting these standards, the National Institute of Standards and Technology (NIST), lacks the budget needed to complete that work independently by the July 26, 2024, deadline, according to several people with knowledge of the work.
Speaking at the NeurIPS AI conference in New Orleans last week, Elham Tabassi, associate director for emerging technologies at NIST, described this as "an almost impossible deadline" for the agency.
Some members of Congress have grown concerned that NIST will be forced to rely heavily on AI expertise from private companies that, due to their own AI projects, have a vested interest in shaping standards.
The US government has already tapped NIST to help regulate AI.
In January 2023 the agency released an AI risk management framework to guide business and government.
NIST has also devised ways to measure public trust in new AI tools.
But the agency, which standardizes everything from food ingredients to radioactive materials and atomic clocks, has puny resources compared to those of the companies at the forefront of AI.
OpenAI, Google, and Meta each likely spent upwards of $100 million to train the powerful language models that undergird applications such as ChatGPT, Bard, and Llama 2.
NIST's budget for 2023 was $1.6 billion, and the White House has requested that it be increased by 29 percent in 2024 for initiatives not directly related to AI.
Several sources familiar with the situation at NIST say that the agencyâs current budget will not stretch to figuring out AI safety testing on its own.
On December 16, the same day Tabassi spoke at NeurIPS, six members of Congress signed a bipartisan open letter raising concern about the prospect of NIST enlisting private companies with little transparency.
"We have learned that NIST intends to make grants or awards to outside organizations for extramural research," they wrote.
The letter warns that there does not appear to be any publicly available information about how those awards will be decided.
The lawmakersâ letter also claims that NIST is being rushed to define standards even though research into testing AI systems is at an early stage.
As a result, there is "significant disagreement" among AI experts over how to work on, or even measure and define, safety issues with the technology, it states.
"The current state of the AI safety research field creates challenges for NIST as it navigates its leadership role on the issue," the letter claims.
NIST spokesperson Jennifer Huergo confirmed that the agency had received the letter and said that it "will respond through the appropriate channels."
NIST is making some moves that would increase transparency, including issuing a request for information on December 19, soliciting input from outside experts and companies on standards for evaluating and red-teaming AI models.
It is unclear if this was a response to the letter sent by the members of Congress.
The concerns raised by lawmakers are shared by some AI experts who have spent years developing ways to probe AI systems.
"As a nonpartisan scientific body, NIST is the best hope to cut through the hype and speculation around AI risk," says Rumman Chowdhury, a data scientist and CEO of Parity Consulting who specializes in testing AI models for bias and other problems.
"But in order to do their job well, they need more than mandates and well wishes."
Yacine Jernite, machine learning and society lead at Hugging Face, a company that supports open source AI projects, says big tech companies have far more resources than NIST, the agency given a key role in implementing the White House's ambitious AI plan.
"NIST has done amazing work on helping manage the risks of AI, but the pressure to come up with immediate solutions for long-term problems makes their mission extremely difficult," Jernite says.
"They have significantly fewer resources than the companies developing the most visible AI systems."
Margaret Mitchell, chief ethics scientist at Hugging Face, says the growing secrecy around commercial AI models makes measurement more challenging for an organization like NIST.
"We can't improve what we can't measure," she says.
The White House executive order calls for NIST to perform several tasks, including establishing a new Artificial Intelligence Safety Institute to support the development of safe AI.
In April 2023, a UK taskforce focused on AI safety was announced; it will receive $126 million in seed funding.
The executive order gave NIST an aggressive deadline for coming up with, among other things, guidelines for evaluating AI models, principles for "red-teaming" (adversarially testing) models, a plan to get US-allied nations to agree to NIST standards, and a plan for "advancing responsible global technical standards for AI development."
Although it isn't clear how NIST is engaging with big tech companies, discussions on NIST's risk management framework, which took place prior to the announcement of the executive order, involved Microsoft; Anthropic, a startup formed by ex-OpenAI employees that is building cutting-edge AI models; Partnership on AI, which represents big tech companies; and the Future of Life Institute, a nonprofit dedicated to reducing existential risk, among others.
"As a quantitative social scientist, I'm both loving and hating that people realize that the power is in measurement," Chowdhury says.
This story originally appeared on wired.com.