
Ethical AI: Discussion Charter


  1. Introduction

Any discussion around Ethical AI should necessarily start with the ‘Data Subject’ and not just the ‘Data Controller’ or the ‘Data Processor’. The European GDPR, for example, makes a clear distinction among the owners of personally identifiable information, data controllers and data processors, assigning a set of responsibilities to each category. This has an impact on how data can be used to define AI goals. Data generated in sectors such as Banking, Financial Services, Insurance, Consumer Products, Healthcare and eCommerce is easily identifiable with the individual. Therefore, guardrails (consent, privacy, use-case, storage etc.) need to be fixed before embarking on any AI project, be it from a business perspective or a social one.

Developments in the global AI space are fast-paced, but as with any technology, large-scale adoption will only come through belief and trust. AI is particularly under the spotlight because it deals with data, much of which is personally attributable. The use of this data, known as Personally Identifiable Information (PII), has a large impact on the choices individuals and societies make.

When it comes to trust and belief, there are multiple stakeholders who are directly or indirectly impacted by AI: customers, employees, suppliers, shareholders and society at large. To scale AI with confidence, the issues around trust and safety need to be addressed, with inclusivity and accessibility built in as cornerstones.

  2. How can a Technology be ‘Ethical’?

A technology may not itself be ‘ethical’, but it can certainly be applied ethically to solve the problems it has been designed for. The success of Artificial Intelligence as a technology depends on the underlying datasets, which are stored, labelled, annotated and analyzed. This data is used to train Machine Learning algorithms and directly shapes their outcomes.

Ethical AI involves conceptualizing, designing, developing and scaling AI solutions that consider aspects of responsibility, accountability, safety, privacy and trust. An Ethical AI framework aims at eliminating discrimination, manipulation, exclusion and bias. The ethical use of AI helps address the need for ‘Human-centered AI’. UNESCO member states have adopted the first-ever global agreement on the Ethics of AI, which has evolved into a body of recommendations on the Ethics of Artificial Intelligence. Some of the key downside risks of AI that the global community is looking to surmount are:

  • Unequal economic and social growth
  • Widening gender gap, particularly in the workforce
  • Exclusion of weaker sections of society (people with disabilities, those lacking economic mobility etc.)
  • Bias in education, justice delivery and access to social schemes

  3. The Ethical AI Discussion Charter

While there are many points of concern and debate surrounding the human use of AI and Robotics, the following list highlights the ones that make the Ethical Use of AI a topic of discussion across the board.

  1. Addressing privacy concerns in a surveillance economy: The question of privacy cuts across Governments, Individuals, Non-state Actors and Businesses. While privacy concerns are common across technologies, they are more pronounced in the case of Artificial Intelligence. These concerns revolve around personally identifiable information, primarily dealing with aspects such as the ‘right to be forgotten’, ‘information access & control’ and the ‘right to secrecy’. While there are technologies to preserve privacy (masking/anonymization, access control; a simple sketch follows below), what is needed is a strong and demonstrable legal framework to interpret relevant laws and take suitable action. The ethical use of Artificial Intelligence can both pre-empt attempts to create a surveillance economy and empower the stakeholders.
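As a minimal illustration of the masking and anonymization techniques mentioned above, the hypothetical Python sketch below pseudonymizes a direct identifier with a salted one-way hash and partially masks an email address. The field names, salt handling and record layout are illustrative assumptions, not a production design.

```python
import hashlib

# Hypothetical salt; in practice this would come from a secrets store.
SALT = "replace-with-a-secret-salt"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted one-way hash (pseudonymization)."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

def mask_email(email: str) -> str:
    """Mask the local part of an email address, keeping the domain."""
    local, _, domain = email.partition("@")
    return (local[:1] + "***@" + domain) if domain else "***"

record = {"name": "Jane Doe", "email": "jane.doe@example.com", "amount": 42.50}
safe_record = {
    "name": pseudonymize(record["name"]),  # irreversible without the salt
    "email": mask_email(record["email"]),  # partially masked for readability
    "amount": record["amount"],            # non-identifying field passes through
}
print(safe_record)
```

In practice, which fields count as identifying, and how aggressively they must be masked, is determined by the applicable legal framework rather than by the code.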

 

  2. Removing decision-system bias: Bias generally occurs when the underlying data used to train the models has inherent flaws, both in what it represents and in how it has been captured and labelled. Some of the critical areas where such biases are known to cloud judgments are justice delivery and predictive policing. There are different types of bias: ‘Learned Bias’ is one a person may not even be aware they carry; ‘Confirmation Bias’ makes people interpret information in line with what they already believe to be right; and ‘Statistical Bias’ is another common phenomenon, arising for example from applying the same dataset to multiple use-cases. While there are efforts to reduce AI bias (one simple check is sketched below), the over-emphasis on and trust in these systems will also need to be regulated.
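As one illustration of how such bias can be surfaced, the sketch below computes a simple demographic parity gap, the difference in positive-outcome rates between two groups, over hypothetical model decisions. The group labels and outcomes are invented for the example; real audits use richer fairness metrics.

```python
from collections import defaultdict

# Hypothetical (group, decision) pairs from a decision model's output,
# where 1 means a favourable outcome (e.g. a loan approval).
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in predictions:
    totals[group] += 1
    positives[group] += decision

# Positive-outcome rate per group.
rates = {g: positives[g] / totals[g] for g in totals}

# Demographic parity gap: a large gap flags a potential representation
# or labelling problem in the training data worth investigating.
parity_gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {parity_gap:.2f}")
```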

 

  3. Preventing monopolistic practices to ensure fair competition: Global leaders in AI have primarily developed their products and services with a profit motive, in which considerations of ethics, oversight and governance are relegated to secondary concerns. For example, large technology companies, also known as Big Tech, hold enormous volumes of data belonging to billions of people. Left unchecked, there is a high probability of them exerting disproportionate influence on consumers, voters and Governments. There have been multiple examples of this in recent years, which has led regulators to come up with policies (including antitrust laws) to keep malicious usage of AI in check. This space is still developing.

 

  4. Ensuring visibility into data models: An AI system will only be as good as the training model that has been used to define it. If there is inherent bias in the data, or if the data is not inclusive or diverse, there could be issues around its efficacy. The lack of visibility owes its origins to opaque algorithms and black-box systems. One way to pre-empt this is documentation, which proposes the use of Data Statements as a design solution (a lightweight sketch follows below). This can go a long way towards alleviating exclusion and bias by creating equitable solutions that can be used across sections of society.
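The sketch below is one lightweight, hypothetical way to render a Data Statement in code, recording how a training set was curated, who is represented in it, and what its known limitations are. The fields shown are illustrative, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class DataStatement:
    """A lightweight data statement documenting how a training set was built."""
    dataset_name: str
    curation_rationale: str          # why these data were selected
    language_and_demographics: str   # who is (and is not) represented
    annotation_process: str          # who labelled the data and how
    known_limitations: list[str] = field(default_factory=list)

statement = DataStatement(
    dataset_name="loan_applications_v1",
    curation_rationale="Historical approvals, 2015-2020, one retail bank",
    language_and_demographics="English-language applications, urban branches only",
    annotation_process="Outcomes taken from back-office records; no manual labels",
    known_limitations=[
        "Rural applicants under-represented",
        "Policy change in 2018 not reflected in earlier records",
    ],
)
print(statement)
```

Shipping such a statement alongside the model gives reviewers something concrete to audit, even when the algorithm itself remains a black box.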

 

  5. Making AI Accessible: AI can not only influence accessibility but can also enable it. In fact, the question of Accessible AI has legal, social (education, healthcare, disability) and economic connotations. For example, the AI for Accessibility program of Microsoft aims at making AI more inclusive for people with disabilities, be it in the realm of learning or in the area of healthcare for all. To make AI accessible, it is important to address the ethical challenges in using AI: inclusivity, bias, privacy, error, etc. Some of the applications of Artificial Intelligence in addressing these challenges are:
    1. Solutions for the Neurodiverse
    2. AI for sign-language translation
    3. Learning solutions for people with motor, hearing or speech disabilities

 

  6. Explainable AI (XAI): Why does an AI solution need to be explainable? What are the pitfalls of not being able to disclose material aspects of an algorithm? The answer lies in making AI trustworthy when putting models into production. Explainable AI (also known as XAI) is a set of tools, programs and policies that can be used to describe an AI/ML model, its reach, its impact and its potential biases. It involves explaining the accuracy, data ethics/transparency and outcomes of an AI-powered decision scenario; one common technique is sketched below. Explainable AI is one of the core pillars on which Ethical/Responsible AI stands.
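As a small, hypothetical example of one widely used XAI technique, the sketch below trains a classifier on synthetic data and uses scikit-learn's permutation importance to estimate how much each feature drives the model's predictions. The data, labels and feature names are invented for the illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.RandomState(0)
# Synthetic tabular data: 3 features, the first two drive the label.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how
# much the model's score drops; larger drops mean the model leans more
# heavily on that feature. This treats the model as a black box, so it
# works regardless of the underlying algorithm.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["feature_0", "feature_1", "feature_2"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Reports like this do not fully open the black box, but they give stakeholders a verifiable account of which inputs a deployed model actually relies on.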

There are many other, wider considerations and social use-cases that act as key influencers in determining the approach to Responsible/Ethical AI; however, the aspects covered in the present article highlight the most contentious and widely discussed ones. In our next set of articles, we shall try to decode the individual discussion points.




Bandev Ghosh
Senior Manager
