“There is no need to die in this war. I advise you to live,” intoned the solemn voice of Ukrainian president Volodymyr Zelensky in one of the videos that went viral in March 2022, after Russia’s full invasion of Ukraine.
Zelensky’s video was followed by another in which his Russian counterpart, Vladimir Putin, spoke of a peaceful surrender. Although both were of low quality, they spread quickly, creating confusion and conveying a distorted narrative.
In the digital universe, where the boundaries between reality and fiction are increasingly blurred, deepfakes continue to challenge our screens. Since the beginning of the war between Russia and Ukraine, deepfakes have been weaponised, infiltrating every corner of social media.
Despite the almost immediate reactions and debunking that followed, their circulation has been more pronounced in non-English-speaking countries. These regions are more exposed to disinformation because debunking tools, which are most advanced for English, are lacking in other languages.
“We are very visual creatures; what we see influences what we think, perceive, and believe,” argues Victor Madeira, journalist and expert on Russian counter-intelligence and disinformation. “Deepfakes represent just the latest weapon designed to confuse, overwhelm, and ultimately cripple Western decision-making and our will to react.”
While the goal is to undermine trust in information, media, and democracy, proactive policies that prioritise user protection are lacking. Meanwhile, the power derived from this manipulation is attractive to online platforms, which are not legally obliged to monitor, detect, and remove malicious deepfakes.
“As companies, they engage in massive competition to expand into new markets, even when they do not have the necessary infrastructure to protect users,” says Luca Nicotra, Campaign Director of the NGO Avaaz, which specialises in investigating online disinformation.
“There are several quality assurance networks that annually review these fact-checkers, ensuring they are independent third parties adhering to professional standards. Another alternative is to monitor the main information and disinformation sources in various countries with databases like NewsGuard and the Global Disinformation Index. It can be costly,” Nicotra says. If such tools are not considered essential, platforms prefer to cut costs.
Deepfake creation
Developments in generative artificial intelligence have raised concerns about the technology’s ability to create and spread disinformation on an unprecedented scale.
“It’s getting to a point where it becomes hard for people to tell if the image they receive on their phone is authentic or not,” argues Cristian Vaccari, professor of political communication at Loughborough University and an expert in disinformation.
Content initially produced with simple means may appear low in quality but, with the necessary modifications, can become credible. A recent example involved a deepfake of US president Joe Biden’s voice urging citizens not to vote.
Similarly, the world’s longest-serving central bank governor, Mugur Isarescu, was the target of a deepfake video depicting the policymaker as promoting fraudulent investments.
“Tools already exist to produce deepfakes even with just a text prompt,” warns Jutta Jahnel, a researcher and expert in artificial intelligence at the Karlsruhe Institute of Technology. “Anyone can create them; this is a recent phenomenon. It is a complex systemic risk for society as a whole.” A systemic risk whose boundaries have already become difficult to delineate.
According to the latest report by the NGO Freedom House, at least 47 governments around the world — including France, Brazil, Angola, Myanmar and Kyrgyzstan — have used pro-government commentators to manipulate online discussions in their favour, double the number from a decade ago. As for AI use, “over the past year, it has been used in at least 16 countries to sow doubt, denigrate opponents or influence public debate.”
According to experts, the situation is worsening, and it is not easy to identify those responsible in an environment saturated with disinformation caused by war.
“The conflict between Russia and Ukraine is causing increased polarisation and motivation to pollute the information environment,” says EU cybersecurity agency (ENISA) expert Erika Magonara.
Through the analysis of various Telegram channels, it emerged that the profiles involved in such content dissemination have specific characteristics. “There is a kind of vicious circle,” explains Vaccari, “people who have less trust in news, information organisations and political institutions become disillusioned and rely on social media or certain circles, following a ‘do your own research’ approach to counter information.” The problem involves not only the creators but also the disseminators.
Pro-Kremlin propaganda
“Online disinformation, especially during election periods and linked to pro-Kremlin narratives, remains a constant concern,” reports Freedom House in its section dedicated to Italy. The same trend appears in its latest findings on Spain.
Since the beginning of the war, Russia has worked on Facebook to spread its propaganda through groups and accounts created for this purpose. An analysis of various Telegram channels operating in Italy and Spain confirmed this trend, revealing inclinations towards extreme right-wing ideologies and anti-establishment sentiments. These elements have provided fertile ground for pro-Kremlin propaganda. Among the most widespread narratives are theories denying the Bucha massacre, claiming the existence of American bio-laboratories in Ukraine, and promoting the supposed ‘denazification’ of Ukraine.
A widespread tendency has been the creation of deepfakes that parody the political protagonists of the war, with personal defamation as the main consequence. A recent study of Twitter by the Lero Research Centre at University College Cork confirmed this effect, stating that “individuals tended to overlook or even encourage the damage caused by defamatory deepfakes when directed against political rivals.”
Targeting reality as if it were a deepfake has negative consequences for the perception of truth. It is another effect of deepfakes in an already manipulated information environment, what academics call the ‘liar’s dividend’.
Another trend identified is the absence of debunking on Telegram. On the morning of 16 March 2022, the Zelensky video became the first political deepfake to spread disinformation in the context of a conflict, underlining the technology’s potential impact. Such content fuelled conspiratorial beliefs and generated harmful scepticism, a phenomenon that occurs more frequently in certain countries.
Disinformation in Italy and Spain
The lack of adequate countermeasures further endangers a digital environment besieged by deepfakes. This is the case in Spain and Italy, where “there are twice as many misinformation situations, but limited resources to monitor this phenomenon,” Nicotra argues.
A 2020 report highlighted this trend, indicating that Italian- and Spanish-speaking users may be more exposed to disinformation. “Social networks detect only half of the fake posts because they have little incentive to invest in other languages.” Most debunking efforts focus on English-language content.
“Right now, it is a competitive disadvantage for any company to stop providing users with misinformation and polarised content,” Nicotra argues.
Telegram is one of the platforms at the centre of this dynamic. Moreover, of all 27 EU countries, Italy and Spain use it the most to obtain information: 27 percent and 23 percent, respectively.
Data on Russian disinformation show a worrying reality, one that further encourages the spread of certain narratives within these information bubbles. As Madeira explains, Mediterranean states are being ‘soft’ on Russia and even more lenient on security issues. Faced with this lack of transparency and control over disinformation, the European Union has tried to intervene by promoting various laws on content regulation.
What the EU still has to do
The AI Act, which was recently finalised by co-legislators, is the first-ever EU law focusing on artificial intelligence.
One of the measures it includes is the labelling of AI-generated content, intended to blunt the effectiveness of disinformation and hinder the generation of illegal content. “It introduces obligations and requirements graduated according to the level of risk to limit negative impacts on health, safety, and fundamental rights,” explains socialist MEP Brando Benifei, who has been leading the parliament’s work on the file.
Social media and other platforms may need to be persuaded to block specific AI-generated content before it is created, rather than labelling it afterwards, Benifei believes.
“What is changing is the level of responsibility that EU institutions are increasingly, and rightly, placing on platforms that amplify this content, especially when the content is political,” Benifei said.
“If you accept deepfakes on your platform, you are responsible for that content. You are also responsible for the structural risks because you act as an amplifier of this disinformation,” argues Dragos Tudorache, liberal MEP and co-rapporteur on the file.
Despite the publication of the European Digital Services Act, which establishes the basis for controlling disinformation on social media, and the approval of the AI Act, “AI has made disinformation a trend, facilitating the creation of false content,” says ENISA’s Magonara.
Deepfakes represent a warfare technique designed to feed particular narratives and shared stereotypes. In a conflict that shows no signs of ending, as Magonara argues, “the real target is civil society.”
The production of this investigation is supported by a grant from the IJ4EU fund