
On January 22, the Framework Act on the Development of Artificial Intelligence and the Establishment of a Trust-Based AI Society (hereafter, the AI Framework Act) will take effect—the first law of its kind in the world to be implemented. The act was enacted with the aim of promoting the AI industry and building a foundation of trust in an AI-driven society. However, it has drawn criticism from civil society groups for being overly focused on industrial promotion while neglecting human rights and information security. At the same time, critics argue that the law’s regulatory provisions are insufficiently clear, increasing uncertainty even from an industry perspective. As the Ministry of Science and ICT plans to release detailed guidelines in January, controversy surrounding the act continues. Below is a Q&A summarizing the background of the debate.
— Why is the AI Framework Act necessary?
“AI is rapidly entering many areas of daily life and directly affecting people’s lives. Although AI’s impact is significant, existing laws are insufficient to address AI-related issues. Major countries such as the European Union (EU) and Japan are also moving forward with AI legislation.”
— What will change once it takes effect?
“Various legal concepts related to AI will be introduced. Most notably, the act creates the category of ‘high-impact AI’ to regulate systems that significantly affect people. High-impact AI refers to AI that has, or may pose, a serious impact on or risk to human life, physical safety, or fundamental rights. Examples include AI used in energy supply, medical devices, criminal investigations, hiring and loan screening, transportation infrastructure, and other systems with comparable effects on people’s lives and rights.”
— What regulations will apply to high-impact AI?
“Given their significant effects, obligations will be imposed to ensure safety. These include requirements to establish management measures for high-impact AI and to prepare and retain documentation demonstrating the system’s safety and reliability. An ‘impact assessment’ must also be conducted to evaluate effects on fundamental rights.”
— Why does controversy continue around the AI Framework Act?
“Industry groups are deeply concerned about the act’s regulatory nature. In particular, high-impact AI entails many obligations, and there is concern that the scope defining high-impact AI is broad and vague—raising fears that virtually all AI systems could fall under this category. Industry voices argue that, at a time when AI is closely tied to national competitiveness, imposing such regulations is like ‘adding shackles before running.’”
— Does that mean the act focuses too much on regulation?
“Civil society groups argue the opposite: that the AI Framework Act is biased toward industrial promotion. They contend that those most affected by AI—such as job seekers, workers, and consumers—are given no guaranteed opportunity to speak or participate. Civil society groups call for expanding the scope of AI systems subject to safety obligations and for significantly strengthening the enforcement decree from a human rights–protection perspective.”
Posted by Freewhale98
2 Comments
[Submission text]
South Korea will implement the world’s first AI Framework Act on January 22, aiming to promote the AI industry while building public trust, but the law has sparked controversy from both industry and civil society. The act introduces new legal concepts, most notably “high-impact AI,” covering systems that may significantly affect human life, safety, or fundamental rights, such as AI used in energy, healthcare, criminal investigations, hiring, finance, and transportation, and it imposes obligations including safety measures, documentation, and human-rights impact assessments. Industry groups warn that the definition of high-impact AI is overly broad and vague, potentially creating regulatory uncertainty and harming competitiveness, while civil society argues the law is tilted toward industrial promotion and fails to adequately protect human rights or ensure participation by those most affected by AI. This kind of Big Tech regulation has also sparked international backlash, with the Trump administration accusing South Korea of “discriminatory trade practice.”
This is relevant to this sub because the debate covers technology development and government regulation.
My opinion is that we need to see the specific administrative guidelines first. I think AI regulation is needed, but it shouldn’t be so restrictive that it undermines technological development.
Interesting, seems like the Koreans beat China to the punch on AI regulation