South Korea Launches World’s First Fully Enforced AI Law with Mixed Reactions
January 29, 2026
South Korea's AI Basic Act, the world's first fully enforced AI law, took effect last Thursday. It requires companies to clearly label AI-generated content.
Under the law, companies must add invisible watermarks to AI-generated art and visible labels to realistic deepfakes. "High-impact AI" systems in areas such as medical diagnosis, hiring, and lending must undergo risk assessments and be able to explain their decisions. The most powerful AI models must also submit safety reports, though no current model meets that threshold.
Companies that break the rules face fines of up to 30 million won (about £15,000), though a grace period of at least one year means no penalties will be imposed initially. South Korea hopes the law will help it become a leading AI power alongside the US and China.
Experts say the law is chiefly designed to promote industry while leaving room to evolve. Alice Oh, a professor at KAIST, said it "was not perfect, but intended to evolve without stifling innovation." Even so, 98% of AI startups surveyed said they were unprepared for it. Lim Jung-wook of Startup Alliance voiced frustration: "There's a bit of resentment. Why do we have to be the first to do this?"
Companies must decide for themselves whether their AI counts as high-impact, a process critics call complex and uncertain. There are also fairness concerns over whether large foreign firms such as Google and OpenAI will face the same rules in practice.
Civil society groups argue the law fails to protect people harmed by AI. South Korea accounts for more than half of the world's deepfake pornography victims, yet the groups say the law protects AI deployers, such as hospitals, rather than the citizens affected by their systems. They also point to loopholes created by exemptions for systems with human involvement.
South Korea's human rights commission criticized the law's unclear definitions, saying vulnerable groups remain unprotected. The Ministry of Science and ICT countered that the law will reduce legal uncertainty and foster a safe AI ecosystem, and that it will refine the rules through forthcoming guidelines.
Experts note that South Korea has chosen a path distinct from both the EU's strict rules and the market-driven US approach. Law professor Melissa Hyesun Yoon said Korea's flexible, trust-based system "will serve as a useful reference in global AI governance."
Read more at The Guardian →
Tags:
South Korea
AI Regulation
Artificial Intelligence
Deepfakes
Tech Startups
AI Law Compliance