Ahead of the AI safety summit starting in Seoul, South Korea, later this week, its co-host, the United Kingdom, is expanding its efforts in the field. The AI Safety Institute – a U.K. body established in November 2023 with the goal of assessing and addressing risks in AI platforms – announced it will open a second location in San Francisco.
The aim is to be closer to the hub of AI development: the Bay Area is home to OpenAI, Anthropic, Google, and Meta, among other companies working on foundational AI technologies.
Foundational models are crucial to generative AI services and other applications. It is notable that, even though the U.K. has signed an MOU with the U.S. to collaborate on AI safety initiatives, it is choosing to establish a presence in the U.S. to address the issue directly.
“Having a presence in San Francisco will allow access to the headquarters of many AI companies,” said Michelle Donelan, the U.K. secretary of state for science, innovation, and technology, in an interview. “While several of them have bases in the United Kingdom, having a base there would be beneficial, providing access to additional talent and enabling closer collaboration with the United States.”
For the U.K., proximity to the epicenter of AI development is not just about understanding what is being built but also about gaining visibility with these firms. The U.K. views AI and technology as a significant opportunity for economic growth and investment.
The recent turmoil at OpenAI regarding its Superalignment team makes this a particularly timely moment to establish a presence in San Francisco.
The AI Safety Institute currently has just 32 employees, a modest team set against the billions of dollars invested in AI companies, which have their own economic motivations for getting their technologies out to paying users.
One of the AI Safety Institute's most significant moves so far came earlier this month, when it released Inspect, a set of tools for testing the safety of foundational AI models.
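To give a sense of what testing with Inspect looks like in practice, here is a minimal sketch modeled on the framework's published quickstart. It defines a single trivial evaluation task and scores a model's reply against an expected answer; module and parameter names can differ between Inspect versions, so treat it as illustrative rather than definitive.

# minimal_eval.py -- an illustrative sketch, not the institute's own evaluation suite
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def smoke_test():
    # One hand-written sample: the model is expected to echo the target string.
    return Task(
        dataset=[Sample(input="Just reply with Hello World", target="Hello World")],
        solver=[generate()],  # ask the model for a completion (older releases named this parameter "plan")
        scorer=match(),       # score by matching the output against the target
    )

# Run against a model of your choice from the command line, for example:
#   inspect eval minimal_eval.py --model openai/gpt-4o

A real safety evaluation would swap in a substantive dataset, solver chain, and scorer, but the overall structure (define a task, point it at a model, collect scores) stays the same.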
Donelan referred to the release as a “phase one” effort. Benchmarking models has proven challenging, and engagement is currently opt-in and inconsistent: companies are under no legal obligation to have their models vetted, and not all are willing to undergo vetting before release, which can mean risks are only identified after a model has already shipped.
The AI Safety Institute is still developing the best ways to engage with AI companies for evaluation. “Our evaluations process is an emerging science in itself,” Donelan said. “Each evaluation helps us refine the process further.”
One of the goals in Seoul is to present Inspect to the regulators gathered at the summit and encourage them to adopt it.
“Now we have an evaluation system. Phase two involves ensuring AI safety across society,” Donelan said.
In the long term, the U.K. aims to develop more AI legislation. However, following Prime Minister Rishi Sunak’s approach, it will avoid legislating until it fully understands the scope of AI risks.
“We do not believe in legislating without a full understanding,” Donelan said, noting that a recent international AI safety report highlighted significant gaps in research and the need for global research incentives.
“Legislation takes about a year in the United Kingdom. If we had started legislating instead of organizing the AI Safety Summit last year, we would still be legislating now without any outcomes,” she added.
“Since the Institute’s inception, we’ve emphasized the importance of an international approach to AI safety, sharing research, and collaborating with other countries to test models and anticipate AI risks,” said Ian Hogarth, chair of the AI Safety Institute. “Today marks a pivotal moment to advance this agenda, and we are proud to expand our operations in an area rich with tech talent, adding to the expertise that our London team has built since the beginning.”