
NITI Aayog proposes Enforcement Mechanism for Responsible AI

November 23, 2020


NITI Aayog has released a working document titled Enforcement Mechanisms for Responsible #AIforAll for public comments on 18 September 2020.

The document recognizes that risks from AI systems vary with context and use case, and suggests that a flexible, risk-based approach be adopted to manage them. In this context, it is important to note that the National Strategy for Artificial Intelligence, published by NITI Aayog in 2018, had proposed an oversight body. Taking this idea further, the working document recommends a Council for Ethics and Technology and delineates the role of the proposed body.

The document notes that existing regulatory instruments are best placed to enforce rules, standards and guidelines, and therefore recommends that the oversight body serve in an advisory capacity, interfacing with existing regulators across sectors. It recommends that the proposed body play an enabling role in the following areas:

  1. Manage and update principles for responsible AI in India
  2. Research the technical, legal, policy and societal issues of AI
  3. Provide clarity on responsible behaviour through design structures, standards, guidelines, etc.
  4. Enable access to responsible AI tools and techniques
  5. Promote education and awareness on responsible AI
  6. Coordinate with sectoral AI regulators, identify gaps and harmonise policies across sectors
  7. Represent India (and other emerging economies) in international dialogue on responsible AI

The document proposes that the council be multidisciplinary in composition and highly participatory in approach.

It also suggests that, for the private sector, voluntary self-regulation would be a good starting point, and that the use of ethics-by-design structures (defined by standards bodies) within organisations be encouraged. Further, the document opines that adherence may be incentivised through a carrot-and-stick approach.

Given this proposal, it is important for stakeholders to analyse whether such an oversight body is necessary at this stage, and how effective it could be. What additional aspects need to be considered to make the body effective in its primary objective of enabling a responsible AI ecosystem in India also needs to be explored. In this context, NASSCOM intends to engage with stakeholders for their views and suggestions.

The deadline for providing feedback to NITI Aayog is 15 December 2020; we therefore request members to share their comments/feedback with jayakumar@nasscom.in by 7 December 2020.

It is also important to note that earlier, in July 2020, NITI Aayog had published a working document on Responsible AI. That document analysed the impact of AI on individuals and society and arrived at the following principles, intended to act as guardrails guiding the growth of the sector.

  • Principle of Safety and Reliability
  • Principle of Equality
  • Principle of Inclusivity and Non-discrimination
  • Principle of Privacy and Security
  • Principle of Transparency
  • Principle of Accountability
  • Principle of Protection and Reinforcement of Positive Human Values

NASSCOM, based on inputs received from members and consultation with experts, submitted its feedback to NITI Aayog.

