
Deepfakes - How Serious Is The Concern?

May 22, 2023


Generative AI is a type of artificial intelligence that can generate many kinds of data, including text, images, audio, and video. One widely used application of generative AI is the creation of deepfakes: computer-generated videos that look and sound real. While deepfakes can be used for entertainment, they also pose a significant risk to society, as they can be used to spread misinformation and even manipulate public opinion. According to data from Sensity, the number of deepfake videos online rose from 7,964 in December 2018 to 85,047 in December 2020.

According to an article from the World Economic Forum, this number has been growing at an estimated annual rate of around 900%. A survey conducted by Statista in August 2022, with a sample of 16,000 respondents from across the globe, found that only 57% of respondents claimed they could detect a deepfake video; the remaining 43% said they would not be able to tell the difference between a deepfake video and a real one. Although the risks associated with deepfakes might not seem significant at present, this application area of generative AI continues to raise concerns among governments and various other parties.

Risks Associated with Deepfakes:

  • Misinformation: Deepfakes can be used to create false information, which can be spread widely and quickly through social media. For example, a deepfake video of a politician saying something they did not actually say could go viral and have a significant impact on the public’s perception of that politician.
  • Impersonation: Deepfakes can be used to impersonate individuals, which can have serious consequences. For example, a deepfake video of a CEO announcing a major change in company policy could cause panic among employees and shareholders.
  • Privacy: Deepfakes can be used to create fake images and videos of individuals, which can violate their privacy and even lead to harassment or blackmail.
  • National Security: Deepfakes can be used to create fake news and propaganda, which can have serious implications for national security. For example, a deepfake video of a foreign leader making a threatening statement could be used to escalate tensions between countries.

 

Possible Ways to Address the Risks:

  • Education: One way to address the risks associated with deepfakes is through education. By educating people about the existence and potential dangers of deepfakes, individuals can learn to identify them and avoid spreading them.
  • Authentication and Detection Technology: Another way to address the risks of deepfakes is through the development of authentication and detection technologies. AI algorithms can be trained to detect and flag deepfakes, helping to prevent their spread. Technologies like blockchain can be used to create digital signatures that verify the authenticity of data. The sharp increase in the number of research papers related to deepfakes shows researchers' growing interest in the area.
  • Legislation: Governments can also enact legislation to regulate the creation and circulation of deepfakes. Such legislation could require that deepfakes and similar content be clearly labeled as AI-generated, and impose penalties for their creation and dissemination without proper consent. China, for instance, has been proactive in enacting dedicated legislation in this field: since January 2020, Chinese authorities have required publishers to disclose whether their content was generated through AI or VR technologies, and failing to do so is considered a violation of the law.
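The verification idea behind the authentication technologies mentioned above can be sketched with a cryptographic fingerprint: a publisher registers a hash of the original media (for example, in a blockchain-backed registry), and anyone who later receives a copy can check it against that record. The sketch below is a minimal illustration using only Python's standard library; the function names and workflow are assumptions for illustration, not a description of any specific product or standard.

```python
import hashlib
import hmac

def fingerprint(data: bytes) -> str:
    # A SHA-256 digest acts as a tamper-evident fingerprint of the media bytes:
    # changing even one byte of the content changes the fingerprint entirely.
    return hashlib.sha256(data).hexdigest()

def is_authentic(data: bytes, registered_fingerprint: str) -> bool:
    # Compare the received copy's fingerprint against the one the publisher
    # registered, using a constant-time comparison.
    return hmac.compare_digest(fingerprint(data), registered_fingerprint)

# A publisher registers a fingerprint when releasing a video
# (placeholder bytes stand in for real media content).
original = b"original video bytes"
registered = fingerprint(original)

# A viewer later verifies the copy they received.
altered = b"manipulated video bytes"
print(is_authentic(original, registered))  # True
print(is_authentic(altered, registered))   # False
```

In a real deployment the registered fingerprint would itself need to be trustworthy, which is where digital signatures or a tamper-resistant ledger come in; the hash alone only proves that content has not changed since registration, not who created it.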

As the generative AI landscape continues to evolve and the generation of content such as images, videos, and audio picks up pace, we might see a further increase in the number of deepfakes on the internet. As difficult as it apparently is to regulate this application area of generative AI, governments will have to put some guardrails around it to prevent significant damage. Can generative AI itself be part of the solution to this problem?




Dhiraj Sharma
Principal Analyst
