How should Europe and America respond to AI deepfakes disrupting elections?

In June 2023, a team supporting Republican rising star and Florida Governor Ron DeSantis released a video on the X platform attacking former president Donald Trump. At the time, DeSantis had just announced his candidacy for the 2024 US presidential election and was Trump's biggest rival within the Republican Party.


The video splices together Trump's "You're fired" moments from his reality-show days with his handling, while in office, of Anthony Fauci, director of the US National Institute of Allergy and Infectious Diseases.
At the start of the COVID-19 pandemic, Fauci was the White House's senior health adviser and became a target of anti-vaccine activists and right-wing conservatives. Although he was at odds with Fauci, Trump repeatedly stated in media interviews that he would not dismiss him.
DeSantis's team paired audio of Trump refusing to fire Fauci with a set of incongruous photos. The video includes a collage of pictures of Trump and Fauci together, three of which show Trump embracing Fauci and kissing his forehead, apparently meant to underscore Trump's closeness to a figure conservatives despise.
Image source: X
But French media later found that the three photos of Trump embracing Fauci were all fakes generated by AI. Similar incidents are also playing out in Europe.
At the end of September last year, Slovakia held a parliamentary election. Shortly before voting began, an audio clip circulated on Facebook in which a voice resembling that of Progressive Slovakia leader Michal Šimečka could be heard discussing with a newspaper journalist how to buy votes.
The audio was released during the 48-hour silence period before the vote, when politicians are no longer allowed to campaign. In the final tally, Progressive Slovakia, led by Šimečka, lost to the left-leaning Smer party. Smer opposes providing military support to Ukraine, while Progressive Slovakia is a pro-EU liberal party.
The audio circulating on Facebook was later determined to have been generated by AI. Facebook's policy mainly covers the removal of deepfake videos and imposes no equivalent requirement on audio. Deepfakes are audio, video, and images created with deep learning algorithms that show people saying or doing things they never said or did, or that depict people who never existed.
2024 is a major election year, with more than 50 countries around the world holding elections. Candidates may use AI deepfakes to attack political opponents, interest groups may use deepfakes to sway how people vote, and the growing difficulty of telling false information from real will erode voters' trust in elections. This has become a new concern for governments everywhere.
The EU, at the global forefront of AI legislation, is once again moving first, preparing to rein in deepfakes before the European Parliament election in June.
On March 14 local time, the European Commission formally sent requests for information to eight platforms, including Bing, Google, Facebook, TikTok, YouTube, and X, asking each platform to explain what steps it has taken to mitigate risks related to generative AI. The risks in question include AI hallucinations, meaning AI fabricating false information, the viral spread of deepfakes, and the automated manipulation of services that can mislead voters.
Regarding generative AI's impact on electoral processes, the dissemination of illegal content, the protection of personal data, and the protection of minors, the Commission also asked the platforms to hand over internal documents, chiefly risk assessments and mitigation measures.
The Commission asked the platforms to submit the materials related to election protection by April 5; the deadline for the remaining materials is April 26. Platforms that fail to respond on time may be fined by the European Commission.
The Commission's latest move is based on the Digital Services Act, which is already in force. That law governs digital platforms in general and is not specifically aimed at AI. This Wednesday, the European Parliament formally voted to approve the EU Artificial Intelligence Act, the world's first comprehensive AI law.
The EU's AI Act contains specific provisions on deepfakes. Article 52 stipulates that if image, audio, or video content generated or manipulated by an AI system constitutes a deepfake, the party deploying the system must disclose that the content was generated or manipulated by AI. Disclosure obligations are relaxed only where the AI is used under legal authorization to detect or investigate criminal offenses, or where the content is part of an evidently artistic or creative work.
The law also addresses AI-generated text. If text generated or manipulated by an AI system is published to inform the public on matters of public interest, the deployer must disclose that the text was generated or manipulated by AI. But if the AI-generated text has undergone human review or editorial control, and a natural or legal person bears editorial responsibility for its publication, no disclosure is required.
Although it places restrictions on deepfakes, the EU's first AI law is still undergoing a final legal and linguistic review and will enter into force only 20 days after its publication in the Official Journal of the European Union. Its provisions apply on different timelines: some take effect 6 months after entry into force, others after 12 months. This means the EU's AI law will not be able to exert its influence before this year's European Parliament election.
In the United States, where the presidential primaries are already under way, AI deepfakes have stirred things up several times this year. Before the New Hampshire primary, an AI-faked audio clip of Joe Biden urged voters not to take part in the primary and to save their votes for the showdown in November; AI-faked photos showed Trump flying on Jeffrey Epstein's plane to "Lolita Island," where Epstein is accused of procuring underage girls for sex.
After the "Lolita Island" photos went viral, Trump spoke out angrily, warning that AI would become a great danger in the future and calling for strong laws to rein it in.
But at the federal level, a nationwide AI law is still a long way off. In February of this year, after the US presidential primaries had begun, the US House of Representatives announced the creation of a bipartisan task force to study AI legislation.
The United States currently handles deepfakes that have already appeared in the election on a case-by-case basis, with the Federal Communications Commission issuing bans as circumstances require. Last month, the commission announced a ban on robocalls that use AI-generated voices, clarifying that such calls violate the Telephone Consumer Protection Act of 1991.
The US consumer advocacy group Public Citizen filed a petition with the Federal Election Commission last year, asking it to amend its regulations to make clear that deliberately using AI to misrepresent candidates or political parties is illegal. The commission has accepted the petition for consideration but has not yet made a decision.
Compared to federal agencies, states in the United States are more proactive in formulating AI laws at the state level.
Public Citizen's statistics show that 43 US states have so far proposed more than 70 bills restricting the use of AI in elections, 7 of which have become law. In Michigan, a law that took effect in November last year requires disclosure of AI-generated deepfakes that damage a candidate's reputation, distort a candidate's positions, or attempt to influence how voters act within 90 days of an election.
But at least 30 of these bills were proposed only this year, and the road from introduction in a state legislature to a formal vote is a long one. Deepfakes in this year's US election are unavoidable; the only unknown is how much they will sway voters.
