
The Need for Responsible AI

May 6, 2023


"ChatGPT is the iPhone moment for AI," famously said Jensen Huang, co-founder and CEO of Nvidia Corporation. At the time of its launch on November 30, 2022, little was known about the impact ChatGPT would have on the world. Now, however, it has become so popular that even school-going children know about it.

In fact, a traditional savory snack shop in Bengaluru, India, known as "Chaat Corner," even renamed itself "ChatGPT Corner" and saw a 10x growth in business thanks to the name's popularity, despite not using ChatGPT technology.

AI has been around for a very long time, but nothing has been as fascinating as what ChatGPT, a generative AI, has done in the last five months. Professionals in all fields, from healthcare and finance to education and entertainment, fear that ChatGPT and other AI systems, such as DALL-E, which can create realistic images and art from natural-language descriptions, could take over their jobs and do them better, faster, and more effectively at a lower cost.

If you have watched the 2013 movie "Her," recall the scenes between Joaquin Phoenix's character and the OS1 operating system: what looked so unrealistic back then is not so unrealistic now. However, the increasing use of AI also raises concerns about its potential risks and ethical implications. As AI becomes more prevalent in society, the need for responsible AI has become more pressing than ever before.

The Risks of Irresponsible AI

  • Lack of Trust

The lack of trust in AI arises from concerns about its reliability, accountability, potential biases, and errors. These concerns stem from the authenticity of the data AI was trained on, the complexity of many AI algorithms and systems, the potential for negative consequences, and a general lack of understanding of how AI operates.

  • Biased Decision Making

One of the major risks of not implementing responsible AI is the potential for biased decision-making. AI systems are only as unbiased as the data they are trained on. If the data contains biases, AI systems can amplify them, resulting in unfair and discriminatory decisions.

For instance, a facial recognition algorithm that is trained primarily on data from white males may not be as accurate in recognizing faces of people from other ethnicities and genders. This can result in wrongful identification and discrimination against marginalized groups.
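One practical guard against this failure mode is to evaluate accuracy separately for each demographic group before deployment, rather than reporting a single aggregate number. The sketch below is a minimal illustration, not taken from any real system; the group names, labels, and data are hypothetical.

```python
# A minimal sketch of a per-group accuracy audit. Assumes you already have
# predictions and ground-truth labels, each tagged with a demographic group;
# all names and data here are purely illustrative.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation data: a large accuracy gap between groups is a
# red flag that the training set under-represents some of them.
results = [
    ("group_a", "match", "match"),
    ("group_a", "no_match", "no_match"),
    ("group_b", "match", "no_match"),
    ("group_b", "no_match", "no_match"),
]
print(accuracy_by_group(results))  # e.g. {'group_a': 1.0, 'group_b': 0.5}
```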

In a recent incident where AI-based facial recognition software went completely wrong, police arrested an innocent man named Alonzo Sawyer from Maryland, US. Sawyer was put behind bars for a week and later released when the police found out that it was a software error.

  • Who owns the copyright to AI creations?

Whether it is new text content, a picture, a song, or a video, these AI systems can produce copycats of human works that dilute the market, and they use the output of content writers and artists as training data, without their permission.

In January 2023, as per a report published here, Getty Images, one of the world's largest image libraries, sued Stability AI, a company that develops AI systems for generating images, video, and audio, for infringing intellectual property rights, including copyright, in content owned or represented by Getty Images.

Today, only a human's work can be copyrighted. With AI becoming more prominent, copyright laws need to be amended to establish effective rules for work generated by AI.

  • Safety and Security Concerns
    • Chinese scientists carried out a rule-breaking AI experiment in space: they gave the technology full control of a satellite and set it free for 24 hours, and the AI picked a few places of interest and ordered the orbiter to take a closer look.
    • True Anomaly, a US-based company, is using AI to train space warfighters with spy satellites.
    • Israel's military is building up AI battlefield technology to hunt Hamas terrorists and protect against the Iran threat.

Granting AI the authority to operate satellites and decide their paths, or deploying AI as military technology on the battlefield, is deeply concerning. These actions have the potential to cause serious international escalations and conflicts, which could result in dire consequences.

  • Fake news generated by AI

In recent years, the issue of fake news has become increasingly widespread, and the proliferation of generative AI has only made the situation worse. Here are a few recent instances of AI-generated fake videos and pictures.


    • A Twitter user recently shared photos of "Donald Trump getting arrested." The hyper-realistic, eerie images, or "deepfakes," quickly went viral, creating shock and delight.
    • Fake pictures, created by generative AI, of Barack Obama and Angela Merkel enjoying themselves on a beach went viral.

  • Data Privacy Concerns

Data privacy is another major concern with AI. In AI development, the dominant paradigm is that the more training data there is, the better the model will perform. For instance, OpenAI's GPT-3 was trained on 570 GB of data collected from the internet.

However, the pursuit of larger models is now causing problems for the company.

Over the past few weeks, several Western data protection authorities have launched investigations into how OpenAI collects and processes the data that powers ChatGPT. They suspect that OpenAI has scraped people's personal data, including names and email addresses, and used it without their consent.
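Scrubbing obvious personal identifiers out of scraped text before it enters a training corpus is one basic mitigation. The sketch below is an assumption for illustration, not OpenAI's actual pipeline; it only catches pattern-matchable identifiers, and names would need a named-entity recognizer on top.

```python
# A minimal sketch (an assumption, not any vendor's real pipeline) of
# redacting obvious personal data from scraped text before training.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace e-mail addresses and phone-like numbers with placeholders.
    Names are harder and would need an NER model; this only catches
    pattern-matchable identifiers."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567."
print(redact_pii(sample))
# Contact Jane at [EMAIL] or [PHONE].
```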

Italy became the first Western country to ban ChatGPT. The Italian data protection watchdog ordered OpenAI to temporarily cease processing Italian users' data amid a probe into a suspected breach of Europe's strict privacy regulations. However, Italy recently lifted the ban after OpenAI implemented measures to comply with the country's privacy requirements.

Is AI terrifying?

Not everything about AI is negative. While AI does have the potential to be misused, it also has tremendous potential for good. It is encouraging to see some of the recent inventions that utilize AI, listed below.

  • An 11-year-old girl from Kerala developed an AI app that detects eye diseases with nearly 70% accuracy. This is a great example of how AI can be used to promote health equity.
  • An Indian judge sought ChatGPT's opinion on the bail plea of a murder accused. Harvey, a San Francisco, US-based startup, has developed an AI-powered digital assistant for lawyers. Hiring a legal advisor is expensive; in the future, we can expect an affordable AI legal advisor.
  • FarmWise Labs, a Santa Clara, US-based company, developed AI-enabled weeders that reduce the need for manual cultivation. The company's fleet of tractors will soon be for sale.

The Importance of Responsible AI

Responsible AI is necessary to ensure that AI benefits society without causing harm. Responsible AI refers to the design, development, and deployment of AI systems that are transparent, accountable, and ethical. It involves the use of data ethics to ensure that AI systems are fair and unbiased, and that they protect the privacy and security of individuals.

Responsible AI is not just a matter of ethics, but also of legal compliance. As AI becomes more integrated into various industries, there is a need for regulations and guidelines to ensure that AI systems are safe and reliable. Responsible AI also promotes innovation by building trust and confidence in AI systems, encouraging investment and adoption.

Implementing Responsible AI

Implementing responsible AI requires interdisciplinary collaboration between experts in various fields, such as computer science, data science, ethics, law, and policy. It involves developing ethical guidelines and standards for AI development, testing, and deployment. It also involves the use of data governance to ensure that AI systems are trained on diverse and unbiased data, and that they protect the privacy and security of individuals.

To implement responsible AI, there is a need for transparency and accountability in AI systems. This includes the ability to audit and explain the decisions made by AI systems, and to provide a feedback mechanism for individuals affected by them. It also involves the development of algorithms that are robust, secure, and explainable, so that they can be trusted by individuals and organizations.
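As a concrete illustration of auditability, here is a minimal sketch of an append-only decision log wrapped around a generic model. The `LoanModel`, field names, and file name are hypothetical; the point is simply that recording inputs, outputs, and a model version makes individual decisions reviewable after the fact.

```python
# A minimal sketch, assuming a generic `model` with a `predict` method, of an
# append-only audit log for AI decisions.
import json
import time

AUDIT_LOG = "decisions.jsonl"

def predict_with_audit(model, model_version: str, features: dict):
    decision = model.predict(features)
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "input": features,
        "decision": decision,
    }
    with open(AUDIT_LOG, "a") as f:  # append-only: one JSON record per line
        f.write(json.dumps(record) + "\n")
    return decision

# Hypothetical usage with a stand-in model:
class LoanModel:
    def predict(self, features):
        return "approve" if features.get("income", 0) > 50000 else "review"

print(predict_with_audit(LoanModel(), "v1.2", {"income": 64000}))
```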

Some guidelines for implementing responsible AI systems are:

  • Be trustworthy.
  • Avoid and remove bias.
  • Be transparent and explainable.
  • Benefit society.
  • Enforce the highest standards for privacy.
  • Be human-centered.
  • Comply with data governance standards such as GDPR (see the sketch after this list).
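On the last point, compliance is ultimately concrete engineering work. Below is a minimal sketch, with a hypothetical in-memory store and field names, of honoring a GDPR-style "right to erasure" request; a production system would also have to purge backups, logs, and datasets derived from the user's records.

```python
# A minimal sketch (hypothetical store and field names) of honoring a
# GDPR-style erasure request: locate and delete all records tied to one user.
user_records = {
    "user-123": {"name": "Jane Doe", "email": "jane@example.com"},
    "user-456": {"name": "John Roe", "email": "john@example.com"},
}

def erase_user(store: dict, user_id: str) -> bool:
    """Delete every record held for `user_id`; return True if anything was
    removed. A real system would also purge backups, logs, and derived data."""
    return store.pop(user_id, None) is not None

print(erase_user(user_records, "user-123"))  # True: record removed
print(user_records)                          # only user-456 remains
```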

Are there any efforts underway to implement responsible AI?

  • Have you read the open letter asking all AI labs to immediately pause for at least six months? In March, more than 1,100 signatories, including Elon Musk, Steve Wozniak, and Tristan Harris of the Center for Humane Technology, signed an open letter that was posted online. The signatory list has since grown to more than 25,000. It may not be wise, however, to stop AI innovation until a regulation is in place.
  • The European Union is considering far-reaching legislation on artificial intelligence. The proposed Artificial Intelligence Act would classify AI systems by risk and mandate various requirements for their development and use. European lawmakers are still debating the details, with many stressing the need both to foster AI innovation and to protect the public.

 

  • The US President met with tech giants to discuss the dangers of AI.

While there are some ongoing efforts to implement responsible AI, these efforts are currently happening in isolated pockets. In order to truly achieve responsible AI, it's crucial that a unified effort is made across different sectors and industries.

Conclusion

Responsible AI is crucial because AI offers great potential benefits but also carries significant risks and ethical challenges. It must ensure trust, fairness, accountability, privacy, and security, and prevent the spread of fake news. Achieving this requires interdisciplinary collaboration and ethical guidelines for development, testing, and deployment. Similar to the Paris Agreement on climate change, all countries in the world must recognize the need for responsible AI and comply with guidelines to ensure AI's positive impact on society.




Venkat Kandhari
Industry Principal

Mr. Venkat is an Industry Principal working with Infosys Limited. A thought leader in the Unified Communications field with 24 years of industry experience in Unified Communications research and product development, he has a proven track record of building technology teams that partner with business leaders to meet strategic goals. Venkat's professional expertise includes the UC Linux platform and UC product security. He holds a Master of Computer Applications (MCA) from Osmania University, Hyderabad, India.
