
The Need for Ethical and Mindful Use of Gen AI

March 11, 2024

AI


By: Nachiket Deshpande, Whole-Time Director & COO, LTIMindtree

“Trust is the currency of interactions,” says Rachel Botsman[i], author of Who Can You Trust? and the creator of the first course on trust in the digital world at Oxford University’s Saïd Business School. When a business’s interactions and relationships cannot be trusted, it can lose as much as 30 percent of its value[ii]. The arrival of Generative AI (Gen AI) throws a curveball into the management of trust. Although Gen AI makes the tantalizing promise of revolutionizing business productivity, there are deep concerns around its ethical use, which can erode trust if not handled correctly. Understanding these ethical issues and using that wisdom to keep the trust of customers, partners, and society is an immediate responsibility that businesses must fulfill.

Gen AI uses multi-modal large language models (MLLMs) trained through self-supervised learning to understand human language. Algorithms, neural networks, and deep learning techniques are then layered over these models to generate new text, summarize documents, carry out translations, write code, hold conversations, and produce music, video, and images that did not exist before. This process is not without gray areas, and it brings common ethical principles into the crosshairs of business. These ethical issues concern data provenance and bias, copyright violations, transparency, accountability, and data privacy.
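To make the generation step described above concrete, the following minimal sketch uses the open-source Hugging Face transformers library to summarize a short passage. The specific model name (sshleifer/distilbart-cnn-12-6) and the input text are illustrative assumptions, not choices taken from this article.

```python
# Minimal, illustrative sketch: asking a pretrained sequence-to-sequence model
# to produce a summary that did not exist before the call.
# Assumption: the "transformers" library is installed, and the model name below
# is just one example of a publicly available summarization model.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

document = (
    "Generative AI systems are trained on large volumes of text gathered from "
    "many sources. Questions about data provenance, bias, copyright, and "
    "privacy arise because the origin of that training data is often unclear."
)

# Generate a short summary of the input passage.
summary = summarizer(document, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```

The same pipeline abstraction supports other tasks mentioned above, such as translation and text generation, by changing the task name and model.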

There are several instances where Gen AI has proven problematic. In a recent incident, a lawyer in the US reportedly used ChatGPT for case research. The judge found that six of the cases cited did not exist and carried bogus judicial citations. The lawyer was unaware that content created by ChatGPT could be inaccurate or even false[iii]. This is not an isolated incident. Parents, teachers, and administrators worry that children will use Gen AI applications to create their college assignments and pass them off as their own. Truthfulness and accuracy are at stake.

Another clear problem is associated with the data being used to train LLMs. The data could have originated anywhere (mostly the internet), and using it without permission can result in copyright violations. This September, 17 authors and the Authors Guild in the US sued OpenAI for copyright infringement, claiming that OpenAI used the authors’ work to train its AI tools without permission[iv]. Designer and educator Steven Zapata, who is fighting to protect artists’ rights, says, “The performance of the model would not be possible without all of the data fed into it – much of it copyrighted.”[v]

For now, the most widely felt ethical problems revolve around using biased or false information as input; data laundering, that is, using someone else’s data to manufacture the content that runs your systems and applications; and passing off Gen AI content as your own.

To unravel how businesses were approaching Gen AI and the challenges around its use, LTIMindtree surveyed 450 early adopters of the technology across the US, Europe, and the Nordics. Called The State of Generative AI Adoption, the study found that leaders were focusing on developing “mindful” AI. As many as 60 percent of the organizations that had extensively adopted Gen AI across multiple functions or across the entire organization said they regularly monitored and evaluated AI systems for potential biases and took corrective action where needed. Those with moderate adoption of the technology (67 percent) were doing likewise. Across the surveyed group, 79 percent regularly audited their usage of Gen AI. These leaders and early adopters tell us a story: if standards of safety, reliability, security, and ethics are not maintained, there will be trouble ahead in the form of lost trust. Beyond brand erosion, legal penalties can be expensive.

Using generative AI to advance the business and investing in ethical policies go hand in hand. Organizations that solve the ethical challenges will move ahead with confidence. They will engage AI practitioners and researchers to help build models that filter misinformation; they will use ethically sourced, unbiased data to train their models; they will control how these models are used; they will welcome regulatory bodies to examine their systems; and they will remain transparent with their customers, clearly indicating where Gen AI is used in their processes.
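As a purely hypothetical illustration of two of these practices, the Python sketch below labels Gen AI output with a visible disclosure indicator and appends each interaction to an audit log. The function names, record fields, and file path are assumptions made for this example; they do not describe any specific product or the practices of the organizations surveyed.

```python
# Hypothetical sketch: transparent labelling of Gen AI output plus a simple
# usage audit trail. All names and fields here are illustrative assumptions.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class GenAIRecord:
    prompt: str
    output: str
    model_name: str
    timestamp: float
    ai_generated: bool = True  # explicit disclosure flag for downstream consumers

def log_and_label(prompt: str, output: str, model_name: str,
                  audit_path: str = "genai_audit.jsonl") -> str:
    """Append the interaction to an audit log and return a clearly labelled output."""
    record = GenAIRecord(prompt=prompt, output=output,
                         model_name=model_name, timestamp=time.time())
    with open(audit_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    # Surface a clear indicator that the content was produced by Gen AI.
    return "[AI-generated content]\n" + output

# Example usage with placeholder values.
print(log_and_label("Summarize Q3 results", "Revenue grew in the third quarter...", "example-llm"))
```

A regular review of such an audit log is one simple way to support the kind of periodic usage audits the survey respondents described.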

Businesses must create strategies – backed by talent – to build trust models when using Gen AI. They must have safeguards for the use of data and their self-learning algorithms. They must create processes that identify and stop the use of misinformation. They must proactively inform customers and users of flaws and breaches that endanger their privacy or safety.

Organizations will do well to consider the early creation of a body such as the Department of Digital Trust with a full-time Digital Ethics Officer at its head. Deploying Gen AI cannot be considered successful until a structured approach to ethics is in place.

Our report, The State of Generative AI Adoption, distills the Gen AI strategies of 450 leading decision-makers. It looks at who is adopting the technology, why it is being adopted, and the best ways to ensure successful adoption.



