South Korea will begin enforcing its AI Basic Act on Jan. 22, 2026, creating national rules for high-impact and generative AI that aim to boost innovation and protect users as global AI regulation accelerates.
What is the South Korea AI Basic Act, and what is changing?
South Korea’s National Assembly approved a comprehensive national law to guide how artificial intelligence is developed, supported, and used across the country. The law is formally titled the Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness, commonly shortened to the AI Basic Act.
The AI Basic Act is built around three big goals:
- Set a national governance framework for AI.
- Support the AI industry with policy tools and public investment.
- Reduce risks from AI—especially in sensitive uses—through transparency and safety duties.
While the European Union already has the EU AI Act, South Korea’s move stands out because it is a single-country (national) framework that combines industry support with targeted obligations for higher-risk uses and for generative AI.
Timeline: How the law moved from proposal to enforcement
South Korea debated AI “framework” legislation for several years before consolidating multiple bills and passing a single act. The law’s obligations take effect on January 22, 2026.
| Milestone | Date | What happened |
| --- | --- | --- |
| First bill tabled | July 2020 | AI framework legislation begins formal parliamentary discussion |
| Bills consolidated | Nov. 2024 | Multiple AI governance proposals merged into one framework |
| National Assembly passage | Dec. 26, 2024 | AI Basic Act approved in plenary session |
| Promulgation / enactment | Jan. 21, 2025 | Law enacted and published |
| Implementation prep | 2025 | Lower statutes and guidance work begins |
| Law takes effect | Jan. 22, 2026 | Core requirements become enforceable |
What “high-impact AI” means under the South Korea AI Basic Act
A central feature of the law is its focus on high-impact AI—systems used in areas where errors or misuse can affect safety or fundamental rights. The act defines high-impact AI as systems with potential to significantly impact human life, safety, or rights, when used in specified sectors.
High-impact AI sectors listed in the law
The law explicitly points to the following areas (and leaves room for additional areas to be defined by presidential decree):
| Sector / use area | Examples of where AI may trigger “high-impact” treatment |
| --- | --- |
| Energy supply | AI supporting grid operations or supply decisions |
| Drinking water production | AI used in water production processes |
| Health care systems | AI used in systems delivering health services |
| Medical devices (including digital medical products) | AI-enabled diagnostics or device decision support |
| Nuclear materials and facilities | AI supporting safety management and operations |
| Biometric analysis for investigations/arrest operations | Face, fingerprint, iris, or similar biometric use cases |
| Individual-impact decisions (rights/obligations) | Employment screening, loan or credit assessments |
| Transportation systems and safety | AI used in major operations and management of transport systems |
| Public-sector decision-making | Eligibility checks, public service determinations, fee collection |
| Education assessment | Student assessments in early childhood through secondary education |
| Additional areas by decree | Other uses affecting safety or basic rights |
What generative AI providers may need to do
The AI Basic Act also introduces transparency duties tied to generative AI and to AI outputs that can look or sound real.
Key duties discussed in implementation planning include:
- Prior notification when providing products or services using high-impact AI or generative AI.
- Labeling obligations for generative AI outputs in specified situations.
- Special notice/labeling for synthetic media that is hard to distinguish from reality, including deepfake-style outputs.
Exactly how labels must appear—and what exceptions apply—will be clarified through lower-level rules and guidelines.
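Because the final label format is still pending, any implementation today is necessarily provisional. As a minimal sketch, teams could prototype a disclosure step like the one below; the `label_output` helper, the metadata keys, and the notice wording are all hypothetical placeholders, not requirements from the act:

```python
# Sketch of a generative-AI disclosure step. The disclosure text,
# metadata keys, and "realistic media" criterion are placeholders;
# the actual requirements will be set by Korea's enforcement decree
# and guidelines.

from dataclasses import dataclass, field


@dataclass
class GeneratedOutput:
    content: str
    modality: str                      # "text", "image", "audio", "video"
    metadata: dict = field(default_factory=dict)


def label_output(output: GeneratedOutput, realistic: bool = False) -> GeneratedOutput:
    """Attach machine-readable provenance metadata and, for synthetic
    media that could be mistaken for reality, a user-facing notice."""
    output.metadata["ai_generated"] = True
    if realistic and output.modality in {"image", "audio", "video"}:
        # Deepfake-style outputs get an explicit, user-visible notice.
        output.metadata["notice"] = "This content was generated by AI."
    return output


labeled = label_output(GeneratedOutput("…", "image"), realistic=True)
print(labeled.metadata["ai_generated"])  # True
```

The point of separating machine-readable metadata from a user-facing notice is that the law appears to treat ordinary generative outputs and hard-to-distinguish synthetic media differently, so a pipeline built this way can adjust either layer once the guidelines land.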
Safety duties for very powerful AI systems
Beyond sector-based “high-impact AI,” South Korea is also planning safety obligations tied to high-performance AI systems. Implementation work describes duties for operators whose AI systems exceed a threshold based on cumulative computation used for training.
This track focuses on lifecycle risk management, including identifying, assessing, and mitigating risks and building a risk management system. The exact threshold and detailed procedures are expected to be set through the enforcement decree and guidelines.
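While the Korean threshold is not yet fixed, operators can already estimate how close they are to any plausible cutoff using the common FLOPs ≈ 6 × parameters × training tokens approximation for dense transformers. The sketch below uses an illustrative 1e25-FLOP threshold borrowed from the EU AI Act's general-purpose model rule; it is an assumption, not the Korean figure:

```python
# Rough estimate of cumulative training compute, using the standard
# FLOPs ≈ 6 * N * D approximation (N = parameters, D = training tokens).
# The threshold below is ILLUSTRATIVE ONLY (it mirrors the EU AI Act's
# 10^25 FLOP figure); Korea's cutoff will be set by enforcement decree.

ILLUSTRATIVE_THRESHOLD_FLOPS = 1e25  # assumption, not the Korean figure


def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * n_params * n_tokens


def may_trigger_safety_duties(n_params: float, n_tokens: float) -> bool:
    """First-pass check against the illustrative compute threshold."""
    return training_flops(n_params, n_tokens) >= ILLUSTRATIVE_THRESHOLD_FLOPS


# Example: a 70B-parameter model trained on 15T tokens
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e}")                         # ~6.30e+24
print(may_trigger_safety_duties(70e9, 15e12)) # below the illustrative line
```

Tracking this estimate across training runs gives an early signal for when the lifecycle risk-management duties described above might attach, once the real threshold is published.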
Governance: new institutions and a “control tower” approach
The AI Basic Act does more than set restrictions. It also builds governance and support structures meant to speed up AI growth while managing risks.
Key parts include:
- A national AI committee structure chaired by the president (as described in the government’s outline of the act).
- An AI Safety Institute intended to support safety and risk-reduction measures.
- A recurring AI Master Plan framework to set national strategy on a multi-year cycle.
Industry support: data, infrastructure, talent, and SMEs
A major theme of the AI Basic Act is public support for AI development. The law outlines policy tools that can include:
- Support for AI R&D and standard-setting.
- Training-data policies and systems to help access high-quality datasets.
- AI data centers and AI clusters to expand infrastructure.
- Special support for startups and small and medium-sized enterprises (SMEs).
- Measures related to AI talent development.
This “support + safeguards” structure is one reason the law is being watched closely by companies that develop models, deploy enterprise AI, or provide consumer AI services in South Korea.
Enforcement: what penalties and investigations could look like
The enforcement model described in implementation analyses points to administrative oversight tools rather than the extremely large, revenue-based fines seen in some other jurisdictions.
Highlights described in summaries of the act include:
- Government powers to investigate suspected violations and issue corrective or suspension orders.
- Administrative fines that can reach up to KRW 30 million for certain failures, including noncompliance with corrective orders, missing key notification duties, or failing to appoint a required domestic representative in applicable cases.
Foreign companies: local agent requirement
The act includes a mechanism requiring certain AI business operators without a Korean address or place of business—based on standards such as user numbers and sales—to appoint a local agent and report that appointment.
How this fits into South Korea’s wider AI governance push
South Korea’s AI Basic Act lands alongside other policy moves aimed at consumer protection and data governance.
For example, South Korea has announced plans to require labeling of AI-generated advertising, with officials saying revisions to telecommunications-related rules would aim to bring the labeling requirement into effect in early 2026. Officials also released figures showing more than 96,700 illegal online ads for food and pharmaceutical products identified in 2024, and 68,950 through September 2025, as part of the rationale for tighter controls.
Separately, South Korea’s data protection authority suspended new downloads of the Chinese AI app DeepSeek in February 2025, citing privacy compliance issues and pointing to a stricter posture on AI services that process personal data.
South Korea AI Basic Act vs EU AI Act: a practical comparison
Many global companies will need to map both regimes. While both use risk concepts and transparency duties, South Korea’s framework is often described as narrower in who it directly regulates (with a strong focus on high-impact areas) and paired with strong industrial policy support.
| Topic | South Korea AI Basic Act | EU AI Act |
| --- | --- | --- |
| Legal scope | National law (single country) | EU-wide regulation across member states |
| Start date | Jan. 22, 2026 | Phased, with different obligations starting earlier; broader application begins Aug. 2, 2026 |
| Risk concept | “High-impact AI” sector list + powerful-system safety track | Multi-tier risk model (including prohibited practices and high-risk obligations) |
| Penalties (headline) | Administrative fines described up to KRW 30 million for specified violations | Larger penalties possible, including turnover-based fines depending on breach |
What companies can do now (before Jan. 22, 2026)
Organizations that build, deploy, or sell AI in South Korea can prepare without waiting for every detail in the enforcement decree.
Practical steps:
- Map whether your AI falls into “high-impact” sectors (especially employment/credit decisions, biometrics, health, transport, public services).
- Inventory generative AI features that produce text, images, audio, or video for users, and plan for notice/labeling workflows.
- Plan documentation early (risk controls, testing, incident handling, and vendor management).
- Watch for enforcement decree thresholds if you train high-performance models with very large compute.
- Confirm local agent exposure if you provide AI services into Korea without a local establishment and meet user/sales triggers.
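The first mapping step can be started as a simple inventory exercise. The sketch below paraphrases the act's sector list into keywords and flags systems that touch them; it is a first-pass triage filter under assumed category names, not a legal classification, which will depend on the final decree:

```python
# Minimal triage sketch for the "map high-impact sectors" step.
# The sector keywords paraphrase the act's categories (the decree may
# define them differently); treat matches as prompts for legal review.

HIGH_IMPACT_SECTORS = {
    "energy supply", "drinking water", "health care", "medical devices",
    "nuclear", "biometric analysis", "employment screening", "credit",
    "transportation", "public services", "education assessment",
}


def triage(system_name: str, use_areas: set) -> dict:
    """Flag a system whose declared use areas overlap the sector list."""
    hits = use_areas & HIGH_IMPACT_SECTORS
    return {
        "system": system_name,
        "possible_high_impact": bool(hits),
        "matched_sectors": sorted(hits),
    }


report = triage("resume-screener", {"employment screening", "chat support"})
print(report["possible_high_impact"])  # True
```

Running this across an AI system inventory produces a shortlist for counsel to review once the decree fixes the definitions, rather than a compliance verdict.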
With the South Korea AI Basic Act set to take effect on January 22, 2026, South Korea is moving into a new phase of AI governance that blends industry-building policies with targeted safeguards for high-impact uses and generative AI. The next major signal for businesses will be the final shape of the enforcement decree and official guidelines that define thresholds, exemptions, and how labeling and notice must work in real products.