
nasscom FEEDBACK on TEC DRAFT STANDARDS FOR FAIRNESS ASSESSMENT OF AI SYSTEMS

February 10, 2023


On December 29, 2022, the Telecommunication Engineering Centre (TEC) issued Standard No. TEC 57050:2022 (Draft), Fairness Assessment of Artificial Intelligence Systems (the "Draft Standards"). Earlier, in March 2022, TEC had initiated a public consultation to develop a framework for fairness assessment of AI/ML systems, to which NASSCOM had submitted its feedback.

By publishing draft standards for fairness assessment of AI/ML systems, TEC has now taken that exercise a step further.

NASSCOM, in its feedback on the Draft Standards, submitted that TEC has adopted a sound approach: standardisation is needed so that industry adopts uniform metrics for the implementation of AI systems. The feedback further clarified that adoption of AI is sector specific rather than sector agnostic, i.e., fairness concerns and biases may differ based on the domain and the underlying use case. Therefore, while the Draft Standards may not apply uniformly across sectors, they provide a basis for a joint exercise between TEC, sectoral regulators, and industry to test the Draft Standards for fairness assessment within each domain.

Our main recommendations are as follows:

1. The Draft Standards can be simplified, for example by explaining technical terms through a glossary, illustrations/examples, and adequate references.

2. The Draft Standards should identify a few sector-wise use cases to explain the applicability of the fairness metrics in the proposed framework.

3. TEC should collaborate with the Telecom Regulatory Authority of India (TRAI) to test the efficacy of the Draft Standards in the telecommunications sector.

4. TEC should test the Draft Standards on government AI tools, such as Digi Yatra or Faceless Assessment for income tax computation.

5. TEC should collaborate with other sectoral regulators (e.g., SEBI and RBI) to float sandbox mechanisms and encourage entities to test their AI-enabled products before commercial roll-out.

6. The Draft Standards should include post-processing bias detection (sections 4.1 and 6.2.3) through explainability.

7. The Draft Standards should avoid notes (which could prejudice the auditor) under the checklist provided for bias risk classification (section 6.2.1).

8. Bias testing (section 6.2.3.1) should include a data obfuscation step for personally identifiable information (PII) when developing a model.

9. The Draft Standards may consider a labelling requirement based on the AI fairness score, which may help reduce information asymmetry among end users.
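To make the idea of a fairness metric concrete, the sketch below computes demographic parity difference, one common group-fairness measure of the kind such a framework could apply. This is an illustrative assumption only: the function name, data, and metric choice are not taken from TEC 57050:2022, which defines its own metrics and procedure.

```python
# Hypothetical illustration of a group-fairness metric (demographic parity
# difference); not the metric or procedure defined in TEC 57050:2022.

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates across groups.

    predictions: list of 0/1 model outputs
    groups:      list of group labels aligned with predictions
    """
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Example: group "A" receives a positive outcome 75% of the time,
# group "B" only 25% of the time, so the disparity is 0.5.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, grps))  # 0.5
```

A value of 0 would indicate equal positive-prediction rates across groups; a labelling scheme like the one suggested in recommendation 9 could, in principle, be derived from scores of this kind.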

Please see the attachment for our detailed submission and recommendations. For more details, please write to Sudipto Banerjee at: sudipto@nasscom.in and Priyanshi Dixit at: priyanshi@nasscom.in.




Download Attachment

20230209_NASSCOM_Feedback_TEC_AI_Fairness_Standards.pdf
