Growing threat of Deepfake technology in Lok Sabha elections


GS-3: Science and Technology

(UPSC/State PSC)

16/05/2024

Source: Indian Express

Context:

Deepfake content is created using Artificial Intelligence (AI). Deepfakes are currently in the news amid the Lok Sabha elections, because the technology can pose a threat to the electoral process and can be used to manipulate public opinion.

About Deepfake Technology:

  • “Deepfakes” are synthetic, artificially generated media: audio and visual content that can be used to mislead people.
  • Deepfakes can take both video and audio forms. They are created using a branch of machine learning called ‘deep learning’. The model is given source and target videos or photos and learns to map one onto the other, so that in the edited video one person's face is replaced with another's.
  • Deepfake is a Generative Adversarial Network (GAN) based technique which consists of two competing neural networks called Generator and Discriminator. These networks help in creating realistic audio and video using artificial intelligence (AI).
  • The generator creates synthetic images or videos that look like real ones, while the discriminator tries to separate the real data from the data created by the generator.
  • Learning from the discriminator's feedback, the generator keeps improving its output until the discriminator can no longer reliably tell real from fake (a minimal code sketch of this interplay follows this list).
  • Creating a deepfake requires a large amount of data in the form of photos or videos of the source and target person, often collected from the internet or social media without that person's consent or knowledge.
  • Deepfakes are a part of deep synthesis technology, which uses other technologies, including deep learning and augmented reality, to combine text, images, audio and video to create virtual scenes.
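The generator and discriminator interplay described above can be illustrated with a short, self-contained sketch. The example below is a toy Generative Adversarial Network written in Python with PyTorch; the network sizes, the stand-in "real" data and the hyperparameters are illustrative assumptions and do not represent any actual deepfake system.

```python
# Minimal GAN sketch in PyTorch: a generator learns to mimic a simple
# "real" data distribution while a discriminator learns to tell real from fake.
# All sizes and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 16, 2           # toy dimensions, not a real image model

generator = nn.Sequential(            # maps random noise -> synthetic sample
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, DATA_DIM),
)
discriminator = nn.Sequential(        # maps a sample -> probability it is real
    nn.Linear(DATA_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for real photos/videos: points drawn from a fixed Gaussian.
    return torch.randn(n, DATA_DIM) * 0.5 + 2.0

for step in range(2000):
    # --- Discriminator step: learn to separate real data from fakes ---
    real = real_batch()
    fake = generator(torch.randn(real.size(0), NOISE_DIM)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss_fn(discriminator(fake), torch.zeros(fake.size(0), 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # --- Generator step: use the discriminator's feedback to improve the fakes ---
    fake = generator(torch.randn(64, NOISE_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))  # try to "fool" the critic
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    if step % 500 == 0:
        print(f"step {step}: d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")
```

Training reaches the "dilemma" mentioned above when the discriminator's outputs hover around 0.5 for both real and generated samples, meaning it can no longer reliably separate the two; full deepfake pipelines replace the toy data here with large collections of face images and much deeper networks.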

History of Deepfakes:

Development of deepfake technology

  • The technique was first demonstrated in 1997, when Christoph Bregler, Michele Covell and Malcolm Slaney's ‘Video Rewrite’ program altered existing footage so that a speaker appeared to mouth the words of a different audio track.
  • The Generative Adversarial Network (GAN), the core technology behind deepfakes, was introduced in 2014 by Ian Goodfellow and his team.
  • The term ‘deepfake’ was coined in late 2017 by a ‘Reddit’ user who used deep learning techniques to superimpose the faces of celebrities onto pornographic videos.
  • Comparable face-replacement techniques have also been used extensively in Hollywood films.
  • By 2018, the technology became easier to use thanks to open-source libraries and tutorials shared online, and after 2020, deepfakes became more sophisticated, making them difficult to detect.

Various challenges related to deepfakes in elections:

Manipulating the electoral process:

  • The creation of deepfake content and the bombardment of voters with highly personalized propaganda lead to confusion and manipulation.
  • Deepfake videos of rivals can be created using AI, which can tarnish their image and affect voters' perception of them. This could give rise to the concept of ‘deepfake elections’.

Spreading misinformation:

  • Deepfake models, especially generative artificial intelligence, can manipulate democratic processes by spreading misleading or false information.
  • For example, during the 2024 Lok Sabha elections, a cloned voice of Mahatma Gandhi was circulated that appeared to show Gandhiji campaigning for a particular political party.
  • In another example, a deepfake video of a ruling party MP went viral on WhatsApp, in which he appears to criticize his political rivals in different languages and urge voters to vote for the ruling party.
  • This risk is compounded on social media platforms, where fact-checking and efforts to maintain election integrity are weak.

Inaccuracies and Unreliability:

  • Various AI models used to create deepfakes, including generative AI models, are prone to inaccuracies and inconsistencies, raising concerns about their credibility.
  • Instances in which Google's AI models have misrepresented celebrities highlight the potential dangers of uncontrolled AI.
  • Inconsistencies in AI models pose inherent risks to society as their use expands.

Ethical Concerns:

  • The use of deepfakes in elections raises ethical questions regarding privacy/confidentiality, transparency and fairness.
  • AI algorithms may retain biases present in the training data, resulting in unfair treatment or discrimination against certain voter groups.
  • Lack of transparency in AI decision-making processes could erode public trust in election results.
  • Unequal access to AI resources could hinder a level playing field in elections and favor parties with more resources.

Regulatory Challenges:

  • Regulating deepfakes in election campaigns is challenging due to the rapid technological advancements and global nature of online platforms.
  • Governments and election officials struggle to keep pace with ever-evolving AI technologies and may lack the expertise to regulate AI-powered electoral activities.

Current legal provisions against deepfakes:

  • Action can be taken against deepfake videos under the relevant sections of the Indian Penal Code (IPC), including the imposition of heavy fines.

Defamation case:

  • A defamation case can also be filed if a person's image is damaged by a deepfake video.
  • If someone is defamed through a deepfake, a case can be filed under Sections 499 and 500 of the IPC.

IT Act 2000:

  • The Act provides for action against data theft and against deepfake videos created through hacking.
  • It protects the privacy of individuals.
  • The IT Act also fixes the responsibility of social media platforms to protect individual privacy: if a platform is informed about such deepfake content, it is responsible for removing it within twenty-four hours.

Copyright Act 1957:

  • If any kind of content is stolen, action can be taken against the culprit under the Copyright Act, 1957.

Section 66D of the IT Act, 2000:

  • Under Section 66D, which covers cheating by personation using a computer resource, a person found guilty can be punished with imprisonment of up to three years and a fine of up to ₹1 lakh.

Way Forward:

  • Broadly speaking, all laws and regulations that specifically address the creation and dissemination of deepfakes should be implemented while balancing freedom of speech and expression.
  • Media literacy should be promoted along with public awareness of the potential risks and impacts of deepfakes.
  • Essentially, various media sources and content must be verified.
  • Technological solutions and standards that can detect deepfakes and prevent their spread, such as digital watermarks and blockchain-based provenance records, should be established (a minimal verification sketch follows this list).
  • Developers of deep learning technology and synthetic media should be required to follow ethical guidelines.
  • There is a need to foster collaboration and coordination among various stakeholders such as governments, media, civil society, academia and industry to address the challenges and opportunities posed by deepfakes.
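As a companion to the point above on digital watermarks and blockchain, here is a minimal Python sketch of source-side content authentication. It only illustrates the core idea, that any alteration after publication breaks verification, using Python's standard hmac and hashlib modules; the key handling, the "publisher" and the sample byte strings are simplified, hypothetical assumptions rather than an actual watermarking or provenance standard.

```python
# A minimal sketch of source-side content authentication, one of the ideas
# behind the digital-watermark/provenance measures mentioned above.
# The key handling and workflow here are simplified assumptions for illustration.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"   # hypothetical key held by the original publisher

def sign_media(media_bytes: bytes) -> str:
    """Return an authentication tag the publisher attaches when releasing content."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Check whether content still matches the tag issued by the publisher."""
    expected = hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"<video bytes of the official campaign clip>"
tag = sign_media(original)

tampered = b"<video bytes with a deepfaked face swapped in>"
print(verify_media(original, tag))   # True: content is unchanged since release
print(verify_media(tampered, tag))   # False: content was altered after release
```

Real provenance systems, including those that record such tags on a public ledger or blockchain, are far more elaborate, but the underlying principle is the same: tampering with published media breaks the verification.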

Conclusion:

According to Nielsen's 'India Internet Report 2023', rural India has more than 425 million internet users, 44 percent more than urban India, and of these, 295 million people use the internet regularly. However, due to a lack of digital literacy and awareness, the future threats posed by deepfakes remain poorly understood. India does have legal weapons to combat deepfakes, but it is important that everyone be aware of them.

------------------------------------------------------

Mains Question:

Discuss the development of deepfake technology, its impact, the challenges it poses, the relevant legal provisions, and possible solutions.