Topics In Demand
Notification
New

No notification found.

Responsible AI Practices are the Need of the Hour in a Hyper-Generative AI World
Responsible AI Practices are the Need of the Hour in a Hyper-Generative AI World

October 27, 2023

AI

59

0

Responsible AI Practices are the Need of the Hour in a Hyper-Generative AI World

 

Generative AI has been in existence for a while and practitioners have been experimenting with and applying them for various use cases over the years. The recent manifestation of generative AI as ChatGPT captured the imagination of a large section of society and has the potential to reimagine the way we have adopted and applied AI models so far.

GPT models are now available on Azure platforms, providing a scalable and secure ecosystem for the adoption and implementation of use cases of generative models. Google also released Bard, its response to ChatGPT, and other players are also going to release their models soon.

2023 will be a year of generative models and we can expect to see intense rivalry among technology players, each outsmarting the other. While the euphoria around this technology is high, it also poses risks. If the risks are not addressed, they can create systemic risks for practitioners and their organizations. There is a need to build guardrails that protect the creators and consumers of AI solutions driven by Generative AI models.

While regulations are around the corner, more innovations in Responsible AI practices are the need of the hour and these need to be a step ahead of innovations in Generative models.

Responsible AI Best Practices

Following Responsible AI practices can help manage risks and protect the creators and consumers of generative models.

 

Valid and Reliable

Ensure the System is Valid and Reliable

There is a need to assess the validity and reliability of generative models through continuous monitoring and regular audits to certify that the system is performing as intended. There is a need to reimagine the maker-checker function and its associated processes. While generative models would increasingly take the role of a maker, the onus of verifying the same and approving them will reside with the checker (human being).

Since generative models work on the principles of generating the most probable word after a word, the most probable sentence after a sentence, and the most probable paragraph after paragraph, they are susceptible to producing non-factual or hallucinated output as well.

Since the models have been trained using large amounts of data harvested through web scraping, there are bound to be accusations of plagiarism and copyright violations.

The recent debacle of Google Bard’s live demo during its launch reemphasizes the importance of robust QA processes. The checker would need to validate the veracity of such output and ensure compliance with processes, policies, and standards. Current risk frameworks would need to be enhanced to incorporate new working methods.

We are getting fast into a new world order where there is a need to add a disclaimer: Created by generative models; verified, edited, and approved by a human being!

 

Safety of Consumers

Ensure the Safety of Consumers

The generative AI system should not cause subliminal manipulation resulting in physical or psychological harm that endangers human life, health, property, or the environment. Special care should be taken when the consumers of such systems are children or mentally challenged or marginalized sections of society.

 

Secure and Resilient

Ensure Systems are Secure and Resilient

Like other technology systems, Generative AI systems need to ensure confidentiality, integrity, and availability of protection mechanisms that prevent unauthorized access and use. Applications like ChatGPT use principles of reinforcement learning where the bot learns based on user feedback.

We have seen examples where users can manipulate and convince the bot to accept a wrong answer through manipulative prompts. The models are susceptible to other types of adversarial attacks that can compromise the resiliency of the systems. Through simulated attacks and defense mechanisms, this can be addressed to an extent.

 

Fairness

Ensure Fairness

The large corpus of training data on which generative models are trained has inherent biases or discriminatory viewpoints on various dimensions and the models are likely to reproduce the content that reflects the same. While there is a need to weed out such data from the training corpus itself, checks and balances are required while using the model to ensure there is no bias or that bias is managed to the maximum extent possible. This needs to be a continuous process with a closed-loop feedback mechanism to constantly improve.

 

Privacy

Ensure Privacy

Generative models provide us with the opportunity to directly use them by applying zero-shot learning principles or fine-tuning them through few-shot learning. The data that would be input for such training has to be sanitized and anonymized if it contains any Personal Identifiable Information (PII).

Since past conversations are stored for future reference, such data should be encrypted and secured so that it does not fall into the hands of hackers who are constantly on the prowl to break through the systems and take advantage of them.

 

Explainable and Interpretable

Make the Models Explainable and Interpretable

Explainability refers to the representation of the logic and mechanisms in the algorithm’s operation and Interpretability refers to the inherent details of AI systems’ output in the context of its use case. Both are equally important. Generative models need to provide transparency on the underlying training data and how the output was derived, and also provide references to the output for the results it provides. They also need to provide details on relevance and confidence scores for the answers so that users can take informed decisions.

 

Accountability and Transparency

Ensure Accountability and Transparency

Transparency and accountability are fundamental to creating trust around AI systems. When users interact with generative AI bots, it is important to provide details of the bot, including its capabilities, and alert them that they are interacting with an AI system and not a human being. Generative systems could pose various types of risks to consumers, so there has to be a clear accountability structure and associated roles and responsibilities defined for monitoring and managing such risks.

 

     Author

Jayachandran Ramachandran

Jayachandran Ramachandran


That the contents of third-party articles/blogs published here on the website, and the interpretation of all information in the article/blogs such as data, maps, numbers, opinions etc. displayed in the article/blogs and views or the opinions expressed within the content are solely of the author's; and do not reflect the opinions and beliefs of NASSCOM or its affiliates in any manner. NASSCOM does not take any liability w.r.t. content in any manner and will not be liable in any manner whatsoever for any kind of liability arising out of any act, error or omission. The contents of third-party article/blogs published, are provided solely as convenience; and the presence of these articles/blogs should not, under any circumstances, be considered as an endorsement of the contents by NASSCOM in any manner; and if you chose to access these articles/blogs , you do so at your own risk.


C5i is a pure-play AI & Analytics provider that combines the power of human perspective with AI technology to deliver trustworthy intelligence. The company drives value through a comprehensive solution set, integrating multifunctional teams that have technical and business domain expertise with a robust suite of products, solutions, and accelerators tailored for various horizontal and industry-specific use cases. At the core, C5i’s focus is to deliver business impact at speed and scale by driving adoption of AI-assisted decision-making. C5i caters to some of the world’s largest enterprises, including many Fortune 500 companies. The company’s clients span Technology, Media, and Telecom (TMT), Pharma & Lifesciences, CPG, Retail, Banking, and other sectors. C5i has been recognized by leading industry analysts like Gartner and Forrester for its Analytics and AI capabilities and proprietary AI-based platforms.

© Copyright nasscom. All Rights Reserved.