Towards responsible AI governance: progress from the Seoul AI Summit


The second edition of the AI Safety Summit, co-hosted by South Korea and the United Kingdom, was held in Seoul on 21 and 22 May. Among the commitments made were a historic agreement by major companies in the industry, the “Seoul AI Business Pledge”, the Seoul Declaration, and the development of the AI Safety Institute Network.

The first day of the Seoul summit was held virtually, bringing together world and industry leaders.

The Seoul AI Business Pledge

Sixteen industry leaders, including Amazon, Google, Microsoft, Meta, Mistral AI and OpenAI, have committed to AI safety measures. Under this voluntary commitment, entitled the “Seoul AI Business Pledge”, they will securely develop and deploy their cutting-edge AI models and share best practices within safety frameworks.

Creating a network of AI institutes

On the sidelines of the first AI Safety Summit in November last year, British Prime Minister Rishi Sunak launched the “AI Safety Institute”, a project that has received broad support from world leaders and major AI companies. Its mission is to rigorously test new AI models before and after their release in order to “address the potentially dangerous capabilities of AI models, including exploring all risks, from societal harms such as bias and misinformation to the most unlikely but extreme risks, such as humanity losing complete control of AI.”

In Seoul, France, Germany, Italy, the United Kingdom, the United States, Singapore, Japan, South Korea, Australia and Canada committed to developing a network of similar institutes to share information on the risks, opportunities and limitations of artificial intelligence models.

The Seoul Declaration

On May 22, during an in-person ministerial meeting, representatives of 28 countries and heads of international organizations adopted the Seoul Declaration, presented a day earlier during a video conference, which aims to promote safety, innovation and inclusion in AI.


