Korea to establish AI safety institute at ETRI
Date: 2024.05.23

[Courtesy of Korea Institute of Science and Technology]

According to Pulse by Maeil Business News Korea, South Korea will establish an institute focused on studying safety issues related to artificial intelligence (AI) technology.

The plan was unveiled during a press briefing following the ministerial session at the AI Seoul Summit held at the Korea Institute of Science and Technology (KIST) in Seoul on Wednesday.

“Today, we are witnessing another wave of innovation driven by unremitting efforts from private AI big tech companies such as OpenAI Inc. and Google LLC,” said Minister of Science and ICT Lee Jong-ho. “But we cannot ignore our anxiety over associated risks to our society. In this regard, we plan to open an AI Safety Institute at the Electronics and Telecommunications Research Institute (ETRI).”

ETRI is headquartered in Daejeon.

The launch of such institutes is a global trend: the United States has launched a consortium, and countries including the United Kingdom, Canada, and Japan have set up their own institutes.

“We will strengthen cooperation among AI safety institutes, take measures to identify AI-generated content, such as watermarking, and enhance collaboration on the development of international standards,” Lee said.

On the same day, ministerial-level representatives of the countries participating in the AI Seoul Summit adopted a joint statement calling for the advancement of the safety, innovation, and inclusivity of AI technology.

The core of safety in the Ministerial Statement is to build a framework to measure AI risks, particularly to identify critical thresholds where AI surpasses safe levels.

Additionally, there will be a push for innovation by actively introducing AI in sectors such as administration, welfare, education, and healthcare.

The inclusivity aspect will focus on bridging the digital divide through AI education.

Meanwhile, Andrew Ng, a renowned AI scholar and professor at Stanford University, who delivered the keynote speech at the summit, emphasized the need to distinguish between technology and its applications when seeking various opportunities from AI.

“For example, large language models (LLMs) can create diverse applications like medical devices, chatbots, and deepfakes,” he said. “It is crucial to differentiate which applications are good and which are bad.”

He added: “If regulations succeed, everyone will become a loser because access to AI technology will inevitably decrease. The issue of regulating open source needs more careful consideration.”



By Jung Sang-bong and Han Yubin


Copyrights Pulse by Maeil Business News Korea. All Rights Reserved.



Source: Pulse by Maeil Business News Korea (May 23, 2024)