
Navigating the AI Horizon: Regulatory Considerations for Banking in India

August 7, 2024


The banking sector in India has long been at the forefront of technological advancements, leveraging data and innovation to enhance customer experiences and streamline internal processes. In recent years, the emergence of Generative Artificial Intelligence (GenAI) has ushered in a new era of possibilities. This blog provides an overview of emerging regulatory considerations in India and globally, highlighting priority areas for banks to focus on to move closer to responsible AI use. 

With large data repositories and a relatively mature technology infrastructure, Indian banks have readily embraced AI solutions, from credit risk assessment to fraud detection algorithms. The widespread use of chatbots is indicative of how pervasive AI has become in customer service provision by banks. AI systems are also being used at the back end to streamline internal operations, for example to automate compliance procedures, analyse large volumes of transaction details, and issue chequebooks or credit cards. Moreover, there is a growing focus on GenAI solutions, which public sector and private banks alike are exploring.

The distinction between AI and GenAI capabilities, akin to the transition from version 1 to version 2 of a system, signals growing capabilities that will eventually increase the scope and diversity of AI-based applications in banking. The pace of this AI integration is also prompting regulators to examine potential risks and review their enforcement capabilities. Amidst these considerations, there is an opportunity for banks to strengthen their compliance mechanisms and their ability to effectively communicate the robustness of their AI use, not only to regulators but also to customers.

 
Global Regulatory Responses: Emerging Concerns and Guidelines 

In this context, regulators worldwide are closely tracking several emerging concerns. Among these are the possibilities of banks concealing or misrepresenting their AI usage to the public. The Monetary Authority of Singapore (MAS) has stated that banks need to proactively disclose their use of AI. The MAS also recommends implementing customer disclosure mechanisms where banks use AI, to help build public confidence.

While public disclosure of AI use is important, falsely claiming the use of AI technologies or exaggerating their functions and capabilities could also result in repercussions. For instance, earlier this year, the SEC chief in the USA warned the financial industry against "AI washing" aimed at attracting investors.

There are also concerns around discriminatory or biased algorithmic outcomes, particularly in cases of automated lending and credit processes. To increase the transparency and explainability of these decisions, the Consumer Financial Protection Bureau (CFPB) in the USA requires lenders employing AI systems to provide an "adverse action notice" to customers, containing accurate and specific reasons for credit denials.
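To make this concrete, below is a minimal, hypothetical sketch of how a lender might derive the "specific reasons" behind an automated credit denial from a simple linear scorecard. Every feature name, weight, reference value, and threshold here is an illustrative assumption; this is not the CFPB's prescribed format or any actual bank's model.

```python
# Hypothetical sketch: deriving "specific reasons" for an adverse action
# notice from a simple linear credit-scoring model. All features, weights,
# reference values, and thresholds are illustrative assumptions only.

# Illustrative scorecard: feature -> (weight, "good" reference value)
SCORECARD = {
    "credit_utilisation":   (-120.0, 0.30),  # higher utilisation lowers the score
    "months_since_default": (   2.0, 60),    # longer since default raises the score
    "income_to_emi_ratio":  (  40.0, 3.0),   # more repayment headroom raises the score
    "recent_enquiries":     ( -15.0, 1),     # many recent enquiries lower the score
}
BASE_SCORE = 600
APPROVAL_THRESHOLD = 650


def score_and_reasons(applicant: dict, top_n: int = 3):
    """Return (score, decision, top reasons pulling the score down)."""
    score = BASE_SCORE
    contributions = {}
    for feature, (weight, reference) in SCORECARD.items():
        # Contribution of this feature relative to its "good" reference value.
        contribution = weight * (applicant[feature] - reference)
        contributions[feature] = contribution
        score += contribution

    decision = "approve" if score >= APPROVAL_THRESHOLD else "deny"
    # Reason codes: the features that pulled the score down the most.
    negatives = sorted(
        (contrib, feat) for feat, contrib in contributions.items() if contrib < 0
    )[:top_n]
    reasons = [
        f"{feat} reduced the score by {abs(contrib):.0f} points"
        for contrib, feat in negatives
    ]
    return round(score), decision, reasons


if __name__ == "__main__":
    applicant = {
        "credit_utilisation": 0.85,
        "months_since_default": 10,
        "income_to_emi_ratio": 1.5,
        "recent_enquiries": 6,
    }
    score, decision, reasons = score_and_reasons(applicant)
    print(f"score={score}, decision={decision}")
    for reason in reasons:
        print(" -", reason)
```

In practice, a bank would derive such contributions from its actual scoring models and map them to standardised, customer-readable reason codes before including them in a notice.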

Earlier this year, the Reserve Bank of India (RBI) also expressed concerns about specific applications of AI in banking, such as algorithmic-based lending processes. The RBI has informally highlighted ten fundamental tenets for financial institutions adopting AI models to uphold. These include "fairness, transparency, accuracy, consistency, data privacy, explainability, accountability, robustness, monitoring and updating, and the imperative of human oversight".

Hallucinations, or factually incorrect AI system outputs, are also a significant challenge. Additionally, there are broader concerns about privacy violations, consent and data usage, and Intellectual Property (IP) considerations around the data used to train AI models. The interconnected nature of the AI supply chain also poses concerns about proprietary data leakage and cybersecurity risks.

 

Proactive Governance: Key Focus Areas for AI Implementation in Banks 

Boards of directors at banks can play a pivotal role in spearheading the implementation of robust risk assessment frameworks and a culture of AI governance. A proactive top-down approach to the implementation of such internal policies and frameworks should include mapping the bank's AI ecosystem, tracking existing and potential risks, and ensuring compliance with existing legal frameworks. More specifically, here are some key focus areas in the context of emerging regulatory considerations and responsible AI usage for banks: 

1. Understanding how and why AI is used in Core Operations: Banks can gain a comprehensive understanding of how and why AI systems are utilised in their core banking operations. This includes mapping the bank's AI ecosystem and the AI solutions used in front-end, middle, and back-end processes. The explainability of decisions made by the AI solutions in use is also important from a regulatory perspective. For instance, an RBI working group report from 2021 stated that, for digital lending applications using AI, it is important to disclose the use of AI/ML models for decisions, the data used for building these models, the parameters used for credit evaluation, and the logic or reasons behind the decisions made by these models.

2. Implications of sourcing decisions with respect to AI Solutions: Banks can conduct thorough due diligence on the origins of the AI solutions they employ and the liability implications of those sourcing decisions. Whether solutions are sourced externally from third-party vendors or built in-house, transparency and accountability must remain paramount.

3. Ability to track AI Risks: Building capacity to track AI risks and adopting risk classification and management systems is essential. Using risk identification and assessment tools at regular intervals can help banks identify and track evolving risks and focus areas; a minimal sketch of one possible risk-register shape follows this list. Such tracking is especially important given the breakneck speed of GenAI solution developments. The US Financial Stability Oversight Council has also emphasised the need for financial institutions to build the capacity to track and manage risks emanating from the use of AI.

4. Orienting existing compliances to AI use cases: Banks in India can examine existing legal frameworks to ensure compliance with regulations such as the Digital Personal Data Protection Act, Fair Practice Code, Consumer Protection Act and Rules, and Guidelines on Digital Lending in the specific context of their AI usage.  

5. Implementing Customer Disclosure and Grievance Redressal Systems: Banks can disclose to customers when AI systems are involved in decisions and establish adequate, accessible grievance redressal mechanisms that allow customers to contest decisions made by those systems.
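As referenced in item 3 above, the following is a minimal sketch of one possible shape for an internal AI risk register: an inventory of AI use cases, each tagged with an owner, sourcing, a coarse risk tier, applicable compliance references, known risks, and a review date. The field names, tiers, and example entries are illustrative assumptions rather than any prescribed RBI or industry format.

```python
# Hypothetical sketch of an internal AI risk register. Field names, risk
# tiers, and example entries are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AIUseCase:
    name: str                   # e.g. "retail credit scoring"
    owner: str                  # accountable business owner
    sourced_from: str           # "in-house" or vendor name
    risk_tier: str              # e.g. "high", "medium", "low"
    compliance_refs: list[str]  # applicable frameworks / guidelines
    known_risks: list[str] = field(default_factory=list)
    next_review: date = date.today()


class RiskRegister:
    def __init__(self):
        self._entries: list[AIUseCase] = []

    def register(self, use_case: AIUseCase) -> None:
        self._entries.append(use_case)

    def due_for_review(self, as_of: date) -> list[AIUseCase]:
        """Use cases whose periodic review date has passed."""
        return [u for u in self._entries if u.next_review <= as_of]

    def by_tier(self, tier: str) -> list[AIUseCase]:
        return [u for u in self._entries if u.risk_tier == tier]


if __name__ == "__main__":
    register = RiskRegister()
    register.register(AIUseCase(
        name="digital lending credit model",
        owner="Retail Credit Head",
        sourced_from="in-house",
        risk_tier="high",
        compliance_refs=["DPDP Act", "RBI Guidelines on Digital Lending"],
        known_risks=["bias in approval rates", "model drift"],
        next_review=date(2024, 12, 1),
    ))
    for entry in register.due_for_review(as_of=date(2025, 1, 1)):
        print(entry.name, "->", entry.known_risks)
```

Even a simple register of this kind gives boards and compliance teams a single view of where AI is used, who owns it, and which reviews are overdue.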

Banks can focus on practical steps such as thoroughly testing AI and GenAI solutions before deployment and establishing clear internal policies or governance structures for their AI use. It would be useful to establish user-friendly ways for customers to understand when AI is being used and how to raise concerns if needed. As stewards of responsible AI use, banks can prioritise and balance consumer interests, regulatory compliance, safety-by-design principles, and ethical considerations in their AI initiatives. Key to achieving this balance could be board-level oversight.  
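To illustrate the pre-deployment testing point, the sketch below runs two simple checks on a candidate credit-decision model's hold-out predictions: overall accuracy and the approval-rate gap between two customer groups. The thresholds, group labels, and toy data are illustrative assumptions; real pre-deployment testing would cover far more dimensions, including robustness, drift, explainability, and security.

```python
# Hypothetical pre-deployment check for a credit-decision model.
# Thresholds, group labels, and data below are illustrative assumptions.

def accuracy(predictions, labels):
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)


def approval_rate_gap(predictions, groups, group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    def rate(group):
        decisions = [p for p, g in zip(predictions, groups) if g == group]
        return sum(decisions) / len(decisions)
    return abs(rate(group_a) - rate(group_b))


def approve_for_deployment(predictions, labels, groups,
                           min_accuracy=0.80, max_gap=0.10):
    checks = {
        "accuracy": accuracy(predictions, labels) >= min_accuracy,
        "approval_rate_gap": approval_rate_gap(
            predictions, groups, "group_a", "group_b") <= max_gap,
    }
    return all(checks.values()), checks


if __name__ == "__main__":
    # 1 = approve, 0 = deny; toy hold-out data for illustration only.
    predictions = [1, 0, 1, 1, 0, 1, 0, 1]
    labels      = [1, 0, 1, 0, 0, 1, 0, 1]
    groups      = ["group_a", "group_a", "group_a", "group_a",
                   "group_b", "group_b", "group_b", "group_b"]
    ok, checks = approve_for_deployment(predictions, labels, groups)
    print("deploy:", ok, checks)
```

Wiring such checks into a release gate, with thresholds set by the bank's own risk appetite, is one way to turn governance policy into a repeatable step before any AI solution goes live.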



