Online videos recently investigated by VOA’s Russian and Ukrainian services show how artificial intelligence is likely being used to create provocative deepfakes targeting Ukrainian refugees.
In one example, a video appears to be a TV news report about a teenage Ukrainian refugee and her experience studying at a private school in the United States.
But the video then flips to footage of crowded school corridors and packets of crack cocaine, while a voiceover that sounds like the girl calls American public schools dangerous and invokes offensive stereotypes about African Americans.
“I realize it’s quite expensive [at private school],” she says. “But it wouldn’t be fair if my family was made to pay for my safety. Let Americans do it.”
Those statements are total fabrications. Only the first section — footage of the teenager — is real.
The offensive voiceover was likely created using artificial intelligence (AI) to realistically copy her voice, resulting in something known as a deepfake.
And it appears to be part of the online Russian information operation called Matryoshka — named for the Russian nesting doll — that is now targeting Ukrainian refugees.
VOA found that the campaign pushed two deepfake videos that aimed to make Ukrainian refugees look greedy and ungrateful, while also spreading deepfakes that appeared to show authoritative Western journalists claiming that Ukraine — and not Russia — was the country spreading falsehoods.
The videos reflect the most recent strategy in Russia’s online disinformation campaigns, according to Antibot4Navalny, an X account that researches Russian information operations and has been widely cited by leading Western news outlets.
Russia’s willingness to target refugees, including a teenager, shows just how far the Kremlin, which regularly denies having a role in disinformation, is prepared to go in attempting to undermine Western support for Ukraine.
Targeting the victims
A second video targeting Ukrainian refugees begins with real footage from a news report in which a Ukrainian woman expresses gratitude for clothing donations and support that Denmark has provided to refugees.
The video then switches to generic footage and a probable deepfake as the woman’s voice begins to complain that Ukrainian refugees are forced to live in small apartments and wear used clothing.
VOA is not sharing either video to protect the identities of the refugees depicted in the deepfakes, but both used stolen footage from reputable international media outlets.
That technique — altering the individual’s statements while replicating their voice — is new for Matryoshka, Antibot4Navalny told VOA.
“In the last few weeks, almost all the clips have been built according to this scheme,” the research group wrote.
But experts say the underlying strategy of spoofing real media reports and targeting refugees is nothing new.
After Russia’s deadly April 2022 missile strike on Ukraine’s Kramatorsk railway station, for example, the Kremlin created a phony BBC news report blaming Ukrainians for the strike, according to Roman Osadchuk, a resident fellow at the Atlantic Council’s Digital Forensic Research Lab.
During that same period, he noted, Russia also spread disinformation in Moldova aimed at turning the local population against Ukrainian refugees.
“Unfortunately, refugees are a very popular target for Russian disinformation campaigns, not only for attacks on the host community … but also in Ukraine,” Osadchuk told VOA.
When such disinformation operations are geared toward a Ukrainian audience, he added, the goal is often to create a clash between those who left Ukraine and those who stayed behind.
Deepfakes of journalists, however, appear designed to influence public opinion in a different way. One video that purports to contain audio of Bellingcat founder Eliot Higgins, for example, claims that Ukraine’s incursion into Russia’s Kursk region is just a bluff.
“The whole world is watching Ukraine’s death spasms,” Higgins appears to say. “There’s nothing further to discuss.”
In another video, Shayan Sardarizadeh, a senior journalist at BBC Verify, appears to say that “Ukraine creates fakes so that fact-checking organizations blame Russia,” something he then describes as part of a “global hoax.”
In fact, both videos appear to be deepfakes created according to the same formula as the ones targeting refugees.
Higgins told VOA that the entire audio impersonation of his voice appears to be a deepfake. He suggested the goal of the video was to engage fact-checkers and get them to inadvertently boost its viewership.
“I think it’s more about boosting their stats so [the disinformation actors] can keep milking the Russian state for money to keep doing it,” he told VOA by email.
Sardarizadeh did not respond to a request for comment in time for publication.
Fake video, real harm
The rapid expansion of AI over the past few years has drawn increased attention to the problem of deepfake videos and AI images, particularly when these technologies are used to create non-consensual, sexually explicit imagery.
Researchers have estimated that over 90% of deepfakes online are sexually explicit. They have been used against both celebrities and ordinary women and girls.
Deepfakes also have been used to target politicians and candidates for public office. It remains unclear, however, whether they have actually influenced public opinion or election outcomes.
Researchers from Microsoft’s Threat Analysis Center have found that “fully synthetic” videos of world leaders are often not convincing and are easily debunked. But they also concluded that deepfake audio is often more effective.
The four videos pushed by Matryoshka — which primarily uses deepfake audio — show that the danger of deepfakes isn’t restricted to explicit images or impersonations of politicians. And if your image is available online, there isn’t much you can do to fully protect yourself.
Today, there’s always a risk in “sharing any information publicly, including your voice, appearance, or pictures,” Osadchuk said.
The damage to individuals can be serious.
Belle Torek, an attorney who specializes in tech policy and civil rights, said that people whose likenesses are used without consent often experience feelings of violation, humiliation, helplessness and fear.
“They tend to report feeling that their trust has been violated. Knowing that their image is being manipulated to spread lies or hate can exacerbate existing trauma,” she said. “And in this case here, I think that those effects are going to be amplified for these [refugee] communities, who are already enduring displacement and violence.”
How effective are deepfakes?
While it is not difficult to understand the potential harm of deepfakes, it is more challenging to assess their broader reach and impact.
An X post featuring the phony videos of refugees received over 55,000 views. That represents significant spread, according to Olga Tokariuk, a senior analyst at the Institute for Strategic Dialogue.
“It is not yet viral content, but it is no longer marginal content,” she said.
Antibot4Navalny, on the other hand, believes that Russian disinformation actors are largely amplifying the X posts with other accounts they control, and that very few real people are seeing them.
But even if large numbers of real people did view the deepfakes, that doesn’t necessarily mean the videos achieved the Kremlin’s goals.
“It is always difficult … to prove with 100% correlation the impact of these disinformation campaigns on politics,” Tokariuk said.
Mariia Ulianovska contributed to this report.