
The Peninsula

Is Korea Showing the Future of Elections: AI Candidates?

Published March 16, 2022
Author: Dongwoo Kim
Category: Korea Abroad

In December 2021, Yoon Suk-yeol’s campaign released a short clip of the candidate. Looking and sounding just like him, the ‘man’ in the video says, “Hello, it’s ‘AI Yoon Suk-yeol.’ Are you surprised at how much I look like [him]?” Since then, the campaign has gone on to release short clips of the candidate responding to questions and requests, ranging from serious inquiries about Yoon’s platform to shout-outs to specific individuals and organizations.

For instance, when asked who he would save if both Moon Jae-in and Lee Jae-myung were drowning, AI Yoon responds in his monotonous voice that he would “cheer for both of them from far, far away.” To a question about hiring individuals from the Democratic Party, he says, “I will not consider the party but the person and their ability.” Most of the clips respond to personal questions meant to make him more approachable to the public (e.g., “Did you go shopping at E-mart today?”).

For this initiative, the campaign has used deepfake technology to produce clips of AI Yoon that look and sound like the candidate. AI Yoon is tightly managed by a team of young developers in the People Power Party campaign, and the responses to questions are screened by Lee Joon-suk, the leader of the People Power Party. Yoon read about 3,000 sentences over roughly 20 hours to provide the training footage.

Closer to the March 9 election day, Yoon’s team began sending clips of AI Yoon, with regionally tailored messages, to the Party’s supporters across the country via text. The use of ‘AI candidates’ demonstrates their (theoretical) benefits from a political-operations perspective: they allow campaigns to ‘connect’ with the public and spread their messages more widely with fewer constraints.

Controversy: AI and Democracy

Predictably, AI Yoon came under severe scrutiny shortly after its launch. For instance, it was pointed out that AI Yoon does not shake his head as the actual candidate does, a well-known habit of the former prosecutor general that is seen as a weakness (the public has called Yoon “Yoon dori-dori,” which roughly translates to “shakey-shakey Yoon”). Critics argued that the use of “AI candidates” would lead to overly polished representations of politicians that deviate from reality, undermining the trust of the electorate and deepening the sense of disconnect between political elites and the public.

The use of ‘AI Yoon’ also underscores challenges ahead. In many ways, AI Yoon was a rather primitive, limited use of AI technology: both questions and responses were tightly screened by the campaign, and the training data was explicitly provided for this purpose, which limits the ethical concerns (relatively speaking). However, the normalization of such a practice, combined with advances in technology, could raise more serious ethical issues that threaten the democratic process. For instance, what if a campaign in 2027 decides to integrate a chatbot function into the ‘AI candidate’ so that voters could have a more “personalized” interaction with the candidate? As seen in incidents with Microsoft’s Tay or Lee Luda, deep learning programs can easily be manipulated by malicious actors to spread hate speech or misinformation. And what would be the effect of normalizing these “AI candidates” in the political process?

The response to AI Yoon suggests that the use of deepfakes by candidates will not end with this election cycle. Lee Jae-myung also launched “AI Jae-myung,” which presented 226 regionally targeted platform points around the country. Lee’s campaign, which called AI Yoon a manifestation of “digital authoritarianism,” claimed that its AI candidate mitigated concerns about misrepresentation through a more “accurate” depiction of the candidate. Similarly, Kim Dong-yeon, the leader of the New Wave Party, launched an AI spokesperson and his own “avatar” in December. The Election Commission intervened on January 11, 2022, but only to a limited extent: it clarified that while the use of deepfakes is not categorically banned, all clips featuring AI candidates must bear an insignia indicating that they are not the real candidates.

Differing Attitudes about AI?

On election day, JTBC “interviewed” deceased former Korean presidents such as Park Chung-hee and Roh Moo-hyun in its coverage, using the same deepfake technology. Perhaps this underscores a Korea-specific comfort with virtual personae and suggests that the use of AI candidates in this election is a manifestation of a broader trend.

Data suggests that South Koreans may be more comfortable with AI personalities. A 2018 Oxford study indicated that East Asians had a substantially more favourable perception of AI, with 59% saying that AI would “mostly help” and 11% “mostly harm,” compared to North America (41% “help” and 47% “harm”) and Europe (38% “help” and 43% “harm”). A 2021 South Korean public opinion poll showed even greater support for the use of AI: 86% said that AI will have a “positive impact” on their personal life and society, while only 14% viewed it negatively.

Further, South Koreans have had greater exposure to “virtual” personae. In 1998, a “cyber singer” called Adam became a sensation, performing alongside first-generation K-pop stars like H.O.T. and Fin.K.L. Through the 2000s, Korean internet users invested substantially in their online “avatars” on Cyworld, BuddyBuddy, and Daum. More recently, Korean companies have begun to employ “AI celebrities” like Rozy, an Instagram influencer with over 120,000 followers who now has ad deals with Shinhan Life and Calvin Klein. In this context, the use of AI by politicians is not a surprise.

Conclusion

The use of AI candidates in this election yet again underscores South Korea’s tech-savviness and willingness to embrace new technologies – a product of unique socio-cultural attitudes, government policies that have promoted the adoption of “4th Industrial Revolution” technologies, and a hyper-aggressive private sector. In the U.S., states like California have banned deepfakes during election periods, and more conservative attitudes towards AI will likely prevent the adoption of such a practice, at least in the short term. But the overall trend demonstrates that attitudes can change drastically within years, especially given the global interest in the metaverse and cryptocurrencies. In this context, proactive policies that delineate and protect core social rights and values will be critical for tempering the impact of new technologies on democratic institutions and processes, in both the U.S. and South Korea.

Dongwoo Kim is a Contributing Author at the Korea Economic Institute of America and a J.D. candidate and Massey College Junior Fellow at the University of Toronto. The views expressed here are the author’s alone.

Photo from screengrab of AI Yoon on YouTube.
