A Journal of Analysis and News
Image of China’s President Xi Jinping and US President Donald Trump hugging, generated by Grok.
By Arhum Naweed
Waking up in a highly digitalized era, should one question technological thresholds? Is it progressive to consider technology inherently bad? Would doing so not set us a decade behind? We are worrying about the right things in the wrong context: it is the user who defines the nature of a technology, so one must question the ethics of the practitioner and the merits of the entertainment. Deepfakes, in a recent turn of events, have become one of the most widespread and intriguing forms of entertainment. A hyper-realistic but artificially generated audio clip, video, or image, made either to showcase innovation or to manipulate, is the newest challenge to logic and sense.
Times already shaped by post-truth politics are now being saturated with deepfakes. Individuals prefer information that aligns with their pre-existing beliefs, identities, and narratives over factual, objective information. Truth becomes fragmented and personalized, fueling polarization and deepening biases in any context. Deepfakes amplify this. By blurring the boundary between real and fake, artificially generated content toys with emotions and feeds into deep-rooted societal issues that are already complex and fragile. Deepfakes distort reality and shape public opinion in more ways than one can comprehend, with politics and democracy at the center. By manipulating algorithms to target particular audiences and create a carefully calculated impact, often short-term but deeply effective in the moment, deepfakes can be considered notorious weapons of the modern era, or a new actor in fifth-generation warfare.
AI-generated audio, video, and images are increasingly used for political manipulation, especially during election periods. In early 2024, voters in New Hampshire received fake robocalls using the voice of Joe Biden, discouraging them from voting in the Democratic primary. Instances like these bring the diverse dangers of AI exploitation into the spotlight. Although such fakes are often debunked quickly, the long-term risk persists. From fabricated or heavily edited speeches that misrepresent candidates’ policies and beliefs to statements meant to divide or confuse voters, deepfakes pose a grave threat to the stability of democracy.
Both foreign adversaries and domestic political groups use deepfakes to sow mistrust toward their opponents, weakening their support and popularity and planting doubts among the general public. Even more threatening is how easily deepfakes can stoke ethnic, religious, and ideological divides. A single fabricated statement, a mere gesture, could ignite sentiments and escalate into something horrendous.
High connectivity breeds greater skepticism, whether toward government institutions or media outlets. Deepfakes make matters worse by creating confusion and panic among the public. Questions about the authenticity of journalism arise, and whataboutism comes into play. Misinformation spreads faster than legitimate reporting, and so the frenzy continues. Whataboutism takes accountability out of the equation, shifting focus from the core issue to a counter-accusation that holds little substance, even when it happens to be legitimate.
Truth becomes debatable with claims such as, “what seems true to you might not hold the same meaning for us.” Is truth a matter of debate rather than fact? How is one to trust facts when facts can be manipulated and spread in a matter of seconds? Deepfakes also bring plausible deniability into the picture: real evidence can be dismissed by claiming it is fake. That is the liar’s dividend in action.
People with higher emotionality are more likely to fall for deepfakes than those with higher rationality. Age, demographics, and ideology all play significant roles in how the public engages with deepfakes. People from certain groups are more likely to consume content that aligns with their traditional mindsets than to broaden their horizons. The space for dialogue and civic engagement shrinks further when their beliefs are validated by widely shared content. This poses a grave risk to vulnerable groups such as minorities and activists: a thirty-second video or audio clip containing something scandalous could trigger a chain of offenses against these groups in the name of protecting honor and morality. In volatile political environments, even a single deepfake can spark mass protests and retaliatory violence before it is debunked. Civil unrest shakes the foundations of democratic institutions at large, and democratic societies suffer most when truth becomes a matter of opinion.
The first and foremost solution to this rising problem must be rebuilding the public’s trust in government institutions. Believing that the government will do the right thing, that the right steps will be taken to preserve the national interest, and understanding why that preservation matters could be the first step toward battling misinformation. The next is improving individuals’ cognitive defenses: critical thinking and evaluation play an impactful role in halting the spread of such content, and people should cultivate the flexibility to expand and update their beliefs when presented with new and credible evidence.
The development of deepfake detection tools, along with badges and watermarks to confirm authenticity, should also be brought into consideration. A lack of sensitive, responsible journalism and of platform accountability lies at the core of this issue. For instance, Elon Musk’s decision to sell verification badges on X contributed significantly to the spread of misinformation through channels that appeared completely authentic.
Furthermore, laws and amendments should be introduced at the global level. The EU AI Act, for example, was the first major regulatory framework for artificial intelligence, with reach extending beyond the EU’s borders. The act imposes fines and restrictions on harmful deepfakes and requires clear transparency and labeling of such content.
Deepfakes are a slow-burning crisis for democracy and state stability. In a time when fabricated voices and images can convincingly mimic reality, the foundation of any democratic society is at risk. For if seeing is no longer believing, democracy itself may become a hollow performance.
Arhum Naweed is pursuing a Bachelor’s in Peace and Conflict Studies at National Defence University, Islamabad, Pakistan. She is currently an intern at the Ministry of Foreign Affairs, Islamabad. Her areas of interest are contemporary global affairs, non-traditional security threats, and peacebuilding measures.