The Peninsula

Deepfakes and Korean Society: Navigating Risks and Dilemmas

Published October 3, 2024
Category: Technology

South Korea is grappling with the challenges of integrating new digital technologies. A year ago, Seoul unveiled its “Digital Bill of Rights,” which seeks to preserve the rights and freedoms of Korean citizens online. But its goal of building a “safe and trustworthy digital society” has been challenged by the rise of deepfake content in Korean society. The US government defines deepfakes as media that has been “synthetically created or manipulated using some form of machine or deep learning (artificial intelligence) technology.” While some deepfakes are innocuous and easily identified, others serve malign purposes and can fool even the most diligent observers. As the technology becomes easier to access, Korea must address several areas of concern in order to minimize its use for harmful ends.

Deepfakes and Their Impact on Korean Society

The current focus on deepfake technology began with the takedown of a Telegram group that was sharing content depicting unwilling victims. According to a report in The Wall Street Journal, the group had around 1,200 members and began sharing material in 2020 that included pornographic deepfake images along with identifiable information about the victims. Victims ranged from elementary school children to university students, and women serving in the Korean armed forces were also affected. Pointing to the range of victims, President Yoon Suk-yeol observed during a cabinet meeting that “anyone can be a victim.” “Deepfake videos may be dismissed as mere pranks, but they are clearly criminal acts that exploit technology under the shield of anonymity,” he said. As women began deleting online photographs to prevent their misuse, the ruling conservative party announced it would form a parliamentary task force to address the issue. JoongAng Ilbo also reported that police across Korea would step up efforts to detect and delete deepfake content and hold classes educating students about the issue.

Deepfake technology has already been a contentious issue in the country because of its effect on the democratic process. In 2022, then presidential candidate Yoon deployed an AI-generated avatar to connect with younger voters. Although his rival, Lee Jae-myung, initially criticized the technology, he later created his own version. But after President Yoon and opposition leader Han Dong-hoon became the subjects of manipulated media that could have misled voters, the Yoon administration moved to regulate the electoral use of deepfake content. At the end of 2023, the Korean National Assembly revised the Public Official Election Act to prohibit deepfake and other manipulated media in the 90 days before an election. Violations can lead to a maximum of seven years in prison and a 50 million won fine. Additionally, the National Election Commission requires campaigns to disclose the use of AI-generated content.

Legislative and Regulatory Developments on Deepfake Content

Given how widespread and sensitive the issue is, it is commendable that the South Korean government is also acting to regulate the use of deepfakes to create malicious sexual content. At the end of September, a National Assembly committee passed a bill that would impose a prison term of up to three years or a 30 million won fine on people who create or consume deepfake sexual content. It also passed a bill strengthening punishments for using sexual materials to coerce minors. Ahead of these legislative actions, lawmakers underlined the bipartisan support for addressing the harms caused by deepfake content. “The ruling and the opposition parties joined hands to ease the concerns of the people about the rising deepfake cases,” said ruling party Representative Kim Sang-wook. These new laws will strengthen existing prohibitions if they are quickly applied to companies in addition to individual users. For example, the amended Telecommunications Business Act and the Act on Promotion of Information and Communications Network Utilization and Information Protection require internet service providers to remove certain types of sexual content. But apps like Telegram are not covered by those laws because their chat rooms are considered private. Korean policymakers or law enforcement officials should clarify whether companies can be held liable for storing deepfake material on their servers. If so, this would help close the space in which deepfake creators operate.

In addition to legislative action, the Korean government also needs to increase its outreach to the private sector. Another reason deepfake content proliferated on Telegram was that the company had little incentive to respond to the Korean government. Its leadership and servers are based outside of Korea, and it ignored official requests to delete offending content. Only after the Korean media began reporting on the issue did Telegram begin to respond to requests sent by the Korea Communications Standards Commission. It was also reported that in early September, the agency established a hotline with Telegram to streamline communications with the company. To be sure, Korea-based companies like Kakao and Naver say they are also intensifying efforts to remove offensive deepfake materials from their networks. But rather than taking an ad hoc approach to internet-based companies, Korea should adopt a more comprehensive one, whether through legislative and regulatory efforts to require compliance with Korean law or by consolidating responsibility within the Korean government itself.

This points to the need for Korea to develop cooperation with other states to tackle the issue. The borderless nature of the internet means such problems can rarely be solved through individual state action. At the end of September, the United Nations General Assembly adopted the Pact for the Future. This multilateral agreement includes the Global Digital Compact, which calls on digital technology companies to “develop solutions and publicly communicate actions to counter potential harms…from artificial intelligence-enabled content.” Before the adoption of the Pact, the South Korean representative mentioned Seoul’s efforts to establish norms for the use of artificial intelligence in the private sector. Since deepfake content and synthetic media were not specifically raised in the agreement, South Korea should take the lead in highlighting the ways in which this technology can be used for malicious ends.

Balancing the Dangers and Opportunities of Deepfake Content

Because deepfake technology poses significant challenges to society, policymakers will need to craft regulations on it carefully. Such technologies can be used for the malign purposes discussed above, but they also have more benign applications. The Responsible Artificial Intelligence Institute observes that deepfakes can help people with disabilities communicate or provide critical training for medical professionals. Policymakers will also need to avoid actions that end up infringing on the rights of their citizens. Commentators have warned that efforts to crack down on the misuse of AI technology can have unintended consequences, pointing to the Korean government’s actions toward Kakao after the 2014 Sewol Ferry disaster and the Yoon administration’s tussles with Korean news agencies. There is always a risk that legitimate concerns about, and measures against, the malicious use of deepfake technology could be politicized or exploited for partisan ends.

AI and related technologies are producing ever more lifelike content and becoming easier to deploy. While they have positive uses, they can also be turned to malign ends, as demonstrated by the deepfake pornography crisis. Korean policymakers should continue to invest in this sector of the technology industry, which presents opportunities for Korean businesses, while also working to identify and mitigate the uses that clearly harm Korean society.

Terrence Matsuo is a Non-Resident Fellow at the Korea Economic Institute of America. The views expressed here are the author’s alone.

KEI is registered under the FARA as an agent of the Korea Institute for International Economic Policy, a public corporation established by the government of the Republic of Korea. Additional information is available at the Department of Justice, Washington, D.C.
