Seoul Summit Leaders Pledge AI Safety Commitment


Government officials and AI industry leaders agreed on Tuesday to implement basic safety measures in the rapidly evolving field and create an international safety research network.

Nearly six months after the first global summit on AI safety at Bletchley Park in England, Britain and South Korea are co-hosting the AI safety summit this week in Seoul. The event highlights the new challenges and opportunities arising from advancements in AI technology.

The British government announced on Tuesday a new agreement between 10 countries and the European Union to establish an international network of institutes akin to the U.K.’s AI Safety Institute, the world’s first publicly backed body of its kind, to accelerate the science of AI safety. The network aims to foster a common understanding of AI safety and align its work on research, standards, and testing. Australia, Canada, the EU, France, Germany, Italy, Japan, Singapore, South Korea, the U.K., and the U.S. have signed the agreement.

On the first day of the AI Summit in Seoul, global leaders and prominent AI companies gathered for a virtual meeting led by U.K. Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol to discuss AI safety, innovation, and inclusion.

During the talks, leaders endorsed the broader Seoul Declaration, stressing the need for increased international collaboration in developing AI that is “human-centric, trustworthy, and responsible” to tackle major global issues, uphold human rights, and close digital divides worldwide.

“AI is a hugely exciting technology — and the UK has led global efforts to deal with its potential, hosting the world’s first AI Safety Summit last year,” Sunak stated in a UK government announcement. “But to get the upside, we must ensure it’s safe. That’s why I’m delighted we have got an agreement today for a network of AI Safety Institutes.”

Last month, the U.K. and the U.S. finalized a partnership memorandum of understanding to collaborate on AI safety research, safety evaluation, and guidance.

The latest agreement follows the world’s first AI Safety Commitments from 16 AI companies, including Amazon, Anthropic, Cohere, Google, IBM, Inflection AI, Meta, Microsoft, Mistral AI, OpenAI, Samsung Electronics, Technology Innovation Institute, xAI, and Zhipu.ai. (Zhipu.ai is a Chinese company backed by Alibaba, Ant and Tencent.)

The AI companies, including those from the U.S., China, and the United Arab Emirates (UAE), have committed to “not develop or deploy a model or system at all if mitigations cannot keep risks below the thresholds,” according to the U.K. government statement.

“It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety,” Sunak noted. “These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI.”

Kate Park
Reporter with a focus on technology, startups and venture capital in Asia. Kate previously was a financial journalist at Mergermarket covering M&A, private equity and venture capital.
