
Tackling AI deep fakes with nuanced human intelligence


Prime Minister Narendra Modi recently flagged concerns over the misuse of Artificial Intelligence (AI) to create deep fakes. The problem of deep fakes has been around for some years now, and it is disquieting even to the advocates of AI. We are all acutely aware of how actress Rashmika Mandanna became an unwitting victim of deep fakes when her face was morphed onto the body of another woman. Such distasteful content tends to go viral in a jiffy and can cause lasting reputational damage. AI is like fire: when controlled and tempered, it is a useful ally; left unrestrained, it can be hazardous beyond imagination. For instance, the use of AI in autonomous vehicles could lead to disastrous results if the car's AI is not properly trained to detect and respond to potential hazards on the road.

AI-powered deep fakes can create hyper-realistic audio and video content, making it harder to tell fact from fiction. From political manipulation to identity theft, they threaten the foundations of trust and truth in our digital age. Countering this menace calls for a comprehensive strategy that integrates technological and human-centric solutions. It is akin to a game of chess against an opponent with a seemingly unbeatable strategy: you have to find a way to outwit them at their own game, in this case by combining technology with human intervention.

The Limits of Technology

Advances in AI can help develop algorithms to detect deep fakes, but the arms race between creators of deceptive content and technology developers continues to evolve. If we rely solely on algorithms, malicious actors may find new ways to outsmart detection mechanisms, creating a perpetual cycle of catch-up. False positives are also possible, potentially harming innocent individuals whose content is mistakenly flagged. To address these issues, AI developers need to incorporate human-based verification to ensure that algorithms are not mistaking real content for fake, while striving to stay ahead of malicious actors with detection that is both fast and accurate.
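The human-based verification described above can be sketched as a simple triage policy: only extreme detector scores are decided automatically, and the ambiguous middle band is routed to human reviewers. This is a minimal illustration, not a real detector; the score function, content IDs, and thresholds are all invented for the example.

```python
# Sketch of human-in-the-loop triage for deep-fake detection scores.
# The detector here is a hypothetical stand-in, not a real model.

def mock_detector_score(content_id: str) -> float:
    """Hypothetical detector confidence that content is fake
    (0.0 = surely real, 1.0 = surely fake)."""
    scores = {"video_a": 0.97, "video_b": 0.55, "video_c": 0.10}
    return scores.get(content_id, 0.5)

def triage(content_id: str, flag_above: float = 0.9, clear_below: float = 0.2) -> str:
    """Three-way routing: auto-decide only extreme scores; send the
    ambiguous middle band to human reviewers to limit false positives."""
    score = mock_detector_score(content_id)
    if score >= flag_above:
        return "auto-flag"
    if score <= clear_below:
        return "auto-clear"
    return "human-review"

for cid in ("video_a", "video_b", "video_c"):
    print(cid, triage(cid))
```

Widening the middle band sends more content to humans and fewer innocent users are wrongly flagged, at the cost of reviewer workload; the thresholds encode exactly that trade-off.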

Human Intelligence as a Counterbalance

Complementing technological solutions with real, intuitive human intelligence is crucial. Machines cannot yet match the human ability to discern nuance, contextualize information, and employ emotional intelligence. Because deep fakes are subtle and evolving, combining AI detection tools with human expertise ensures a more robust defense.

Digital Literacy Education: Skills for evaluating digital content should be taught at an early stage. Educating people in media literacy gives them the tools to recognize signs of manipulation, and fostering a culture of skepticism and critical thinking can collectively protect society from the spread of deep fake content.

Crowdsourced Verification: Content authenticity can be checked by harnessing the collective intelligence of the crowd. Platforms and organizations can implement crowdsourced fact-checking initiatives, encouraging users to report and verify suspicious content; the crowd's diverse perspectives and specialized expertise enhance the accuracy of verification.
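One way a platform might aggregate such crowd reports is to weight each reporter's vote by their past track record, escalating content once the weighted "fake" margin crosses a threshold. This is an illustrative sketch only; the reputation scores, vote format, and threshold are assumptions invented for the example.

```python
# Sketch of crowdsourced verification via reputation-weighted voting.
# Reputations and the escalation threshold are hypothetical values.
from collections import defaultdict

# Hypothetical per-reporter track records (fraction of past reports
# that proved accurate).
reputation = {"alice": 0.9, "bob": 0.6, "mallory": 0.2}

def weighted_verdict(reports, threshold=1.0):
    """reports: list of (reporter, vote) pairs, vote in {'fake', 'real'}.
    Escalates to human fact-checkers when the reputation-weighted
    'fake' margin reaches the threshold."""
    tally = defaultdict(float)
    for reporter, vote in reports:
        tally[vote] += reputation.get(reporter, 0.5)  # unknown users get neutral weight
    margin = tally["fake"] - tally["real"]
    return "escalate" if margin >= threshold else "keep-monitoring"

reports = [("alice", "fake"), ("bob", "fake"), ("mallory", "real")]
print(weighted_verdict(reports))  # fake weight 1.5 vs real 0.2 -> escalate
```

Weighting by reputation dampens the influence of bad-faith or habitually inaccurate reporters, which matters because a raw headcount of reports is easy to brigade.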

Ethical AI Development: AI technology must be developed and deployed ethically. To strike a balance between innovation and responsible use, interdisciplinary teams, including ethicists, psychologists, and sociologists, are needed. AI development must be guided by human oversight in a way that aligns with societal norms and values.

Taking on the deep fake dilemma requires a multifaceted approach that draws on both technology and human intelligence. AI alone cannot fortify our defenses; it is the nuanced, contextual understanding and ethical compass of human intelligence that completes them. By fostering a symbiotic relationship between man and machine, we can mitigate the threats posed by deep fake technology. It is like warding off a swarm of bees: AI is the net that traps them, but human intelligence is needed to recognize and understand them in order to combat them effectively.




Jayajit Dash
Senior Manager, Corporate Communications

Contrarian, communicator, story-teller, blogger,

© Copyright nasscom. All Rights Reserved.