Before the AI safety meeting starts later this week in Seoul, South Korea, the UK, which is also hosting it, is stepping up its own work in the area. They opened a new office in San Francisco. The AI Safety Institute is a U.K. group that was created in November 2023 with the big goal of evaluating and fixing risks in AI systems.
The goal is to get closer to where AI is being developed the most. Some of the most important companies in AI are based in the Bay Area. These include OpenAI, Anthropic, Google, and Meta.
Generated AI services and other apps depend on foundational models. It’s interesting that the U.K. is still choosing to set up shop in the U.S. to deal with the issue, even though the two countries have signed an MOU to work together on AI safety projects.
Michelle Donelan, the U.K.’s secretary of state for science, innovation, and technology, told TechCrunch, “Having people on the ground in San Francisco will give them access to the headquarters of many of these AI companies.” “Some of them already have bases here in the UK, but we think it would be very helpful to have one there too. That way, they could benefit from an even larger pool of talent and work even more closely with the US.”
Part of the reason is that being closer to the core helps people understand what is being built and also makes the U.K. more visible to these companies. That’s important because the U.K. sees AI and technology as a big chance for business and economic growth.
And with all the recent trouble at OpenAI involving its Superalignment team, it seems like the perfect time to set up a position there.
The AI Safety Institute, which began in November 2023, is not very big these days. The group only has 32 employees, making it look like David versus Goliath in the world of AI technology. This is because companies that build AI models have billions of dollars riding on them, and they want to get their technologies into the hands of paid users for financial reasons.
The AI Safety Institute made a big step forward when it released Inspect earlier this month. It is its first set of tools for testing the safety of basic AI models.
Donelan talked about that release today as a “effort in phase one.” Not only is it hard to compare models, but participation is also very much an opt-in and inconsistent process right now. A top source at a U.K. regulator said that companies are not required by law to have their models checked out at this point, and some companies don’t want to have their models checked out before they come out. This could mean that the horse may have already run when risk could be seen.
Donelan said that the AI Safety Institute was still working on ways to get AI companies to help them test their products. She said, “Our evaluation process is a new science in and of itself.” “Every time we do an evaluation, we will improve the process and make it work better.”
Donelan said that one goal of the meeting in Seoul is to show regulators Inspect and try to get them to adopt it too.
“Now we have a way to evaluate.” That’s why phase two should also be about making AI safe for everyone, she said.
Longer term, Donelan thinks the U.K. will pass more AI laws, but it will wait to do so until it fully knows the risks associated with AI, which is in line with what the country’s Prime Minister, Rishi Sunak, has said about the matter.
“We don’t believe in passing laws before we fully understand and have a grip on the situation,” she said, adding that the institute’s most recent international AI safety report, which was mostly about trying to get a full picture of research done so far, “highlighted that there are big gaps missing and that we need to encourage and incentivize more research globally.”
“In the UK, laws are made over the course of about a year.” According to Donelan, “if we had just started passing laws when we started instead of planning the AI Safety Summit in November of last year, we’d still be passing laws now and have nothing to show for it.”
Also Read: This Camera Gives Ai Poems in Exchange for Pictures
“Since the beginning of the Institute, we have been clear on how important it is to take an international approach to AI safety, share research, and work together with other countries to test models and plan for the risks of frontier AI,” said Ian Hogarth, chair of the AI Safety Institute. “Today is a turning point that lets us move this agenda forward even more. We’re excited to be expanding our operations to a place brimming with tech talent, adding to the incredible knowledge that our staff in London has brought from the start.”
What do you say about this story? Visit Parhlo World For more.