
Evolving AI Regulatory Landscape – An Overview

May 31, 2024


Introduction

Investments in AI have risen significantly across the globe, with a staggering $83 billion invested in 2023. The global AI market is projected to grow from $110-130 billion in 2023 to $320-380 billion by 2027, a CAGR of 25-35% (BCG-nasscom). This growth rides on surging interest in AI and its rising adoption, the evolution of AI technology and its enabling ecosystem, applications in areas like generative AI, and the proliferation of AI into organization-wide business processes, as companies increasingly pivot towards tech-enabled business transformation backed by a cross-functional AI strategy.

Going into 2024, the stage is set for further acceleration as enterprises shift their focus from generative AI experimentation to practical implementation, reconcile their AI investments, and move towards responsible AI. However, amidst all the AI hype, the reality is that few ground rules exist to chart a clear path and vision for long-term AI adoption, from both a business standpoint and a risk, compliance, and regulatory one. The rapid surge in ‘synthetic’ content, including misinformation, deepfakes, fake news, and other AI-enabled scams, is a cause for concern. To address the potential risks and challenges emanating from the rapid interest in AI and its adoption, technology companies, enterprises, and governments must align on common best practices in the form of regulations. Here is a quick glimpse into the current state of regulations globally.

 

| Country/Region | Approach to AI regulation | Primary regulation/guideline/framework | Date of introduction/passing/enforcement |
| --- | --- | --- | --- |
| EU | Precautionary, dedicated AI law | EU AI Act | Passed in March 2024 |
| US | Decentralized, no unified federal regulation, specific AI regulations | The Safe, Secure, and Trustworthy Development and Use of AI - Executive Order (E.O.) 14110 | Issued in Oct. 2023 |
| UK | Context-specific, leveraging existing sectoral laws | Draft AI Bill | Introduced in Nov. 2023 |
| China | Unified AI regulation | Draft AI Law | Issued in Jan. 2024 |
| China | Unified AI regulation | Interim Measures for the Management of Generative Artificial Intelligence Services | Implemented in Aug. 2023 |
| India | “Whole-of-government approach”, leveraging different ministries | Advisories issued by MeitY | Issued in March 2024 |
| India | “Whole-of-government approach”, leveraging different ministries | National Strategy on Artificial Intelligence (NITI Aayog) | Published in June 2018 |

Figure 1 – Current state of AI regulations globally (illustrative list only)

 

The EU AI Act – paving the way for AI regulations globally.

One of the most significant developments has been the passing of the EU AI Act in March 2024. It is the first comprehensive legal and regulatory framework for AI globally and underscores the EU’s position at the forefront of setting up the necessary guardrails for AI adoption while driving scale, governance, and innovation. Here are the key highlights of the Act:

  • Defines a risk-based approach and classifies AI systems into categories of unacceptable risk, high risk, limited risk, and low risk.
  • Imposes penalties of up to 7% of a company’s total turnover or €35 million, whichever is higher, depending on the severity of the infringement (see the sketch after this list).
  • Ensures protection of public interest concerning health, safety, fundamental rights, and data protection, amongst other areas, applicable across sectors.
  • Developed in accordance with the Charter of Fundamental Rights of the European Union, 2000, Declaration of Digital Rights and Principles, and other European laws.
  • Applies to stakeholders and participants across the delivery value chain encompassing AI systems, applications, and hardware – developers, distributors, manufacturers, users, etc. located in or outside of the EU (provided the output of the AI system is intended to be used in the EU).
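
To make the tiered structure and the penalty ceiling above concrete, here is a minimal Python sketch. It is purely illustrative: the tier names follow the Act’s four categories listed above, but the example use cases in the comments, the function name max_penalty_eur, and the simplified formula (which covers only the top fine bracket for the most severe infringements) are assumptions for illustration, not legal definitions.

```python
# Minimal sketch (not legal advice): illustrative view of the EU AI Act's
# four risk tiers and its top penalty ceiling, as described above.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned practices, e.g. social scoring (illustrative)
    HIGH = "high"                  # e.g. hiring or critical-infrastructure AI (illustrative)
    LIMITED = "limited"            # transparency obligations, e.g. chatbots (illustrative)
    LOW = "low"                    # minimal obligations, e.g. spam filters (illustrative)


def max_penalty_eur(total_annual_turnover_eur: float) -> float:
    """Ceiling for the most severe infringements: 7% of total turnover
    or EUR 35 million, whichever is higher (hypothetical helper)."""
    return max(0.07 * total_annual_turnover_eur, 35_000_000)


if __name__ == "__main__":
    # A company with EUR 1 billion turnover faces a ceiling of EUR 70 million.
    print(f"Penalty ceiling: EUR {max_penalty_eur(1_000_000_000):,.0f}")
```

In practice, the applicable fine bracket depends on the type of infringement; the sketch only shows how the “7% or €35 million, whichever is higher” cap is computed.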

While the Act sets out provisions to ban AI practices and systems considered unethical or ‘unacceptable’, such as social scoring or biometric categorization based on traits like race and gender, it does not paint an exhaustive picture of the potential use cases and scenarios that may invoke risks stemming from generative AI systems and foundation models, given how new and fast-evolving these areas are. It also carries high compliance costs, particularly for small and medium-sized organizations and the start-up community, which may act as a deterrent to innovation. Other shortfalls will only be addressed during the enforcement phase, once additional guidance for the practical implementation of the law follows, expected after June 2024 or later. The law received the final vote of the Council of Ministers on 21st May, after the European Parliament adopted it in March this year, and will be fully applicable by June 2026.

AI regulation in the US – incremental approach leading to complexities.

In the US, there is currently no unified AI regulation or comprehensive legislation at the federal level. However, focused legislation, including executive orders, is being introduced for specific use cases at both the federal and state levels. Below is a non-exhaustive list of key examples:

  • National AI Initiative Act (2020) establishes a new National AI Initiative Office, whose primary objective is to serve as the focal point for federal AI activities across government departments, agencies, and public and private organizations.
  • The AI Training Act (2022) requires the Office of Management and Budget (OMB) to establish or otherwise provide an AI training program for the acquisition workforce of executive agencies (e.g., those responsible for program management or logistics), with exceptions.
  • The Safe, Secure, and Trustworthy Development and Use of AI (Executive Order (E.O.) 14110, Oct. 2023) establishes a government-wide, cross-department strategy to drive responsible AI adoption through federal agency leadership and engagement with partners, and directs 50+ federal agencies to take more than 100 actions to implement the order’s guidance across eight policy areas, such as safety and security, worker support, consumer protection, and privacy.
  • AI Environmental Impact Act (2024) directs the National Institute of Standards and Technology (NIST) to create standards for measuring and reporting AI’s environmental impacts.

What makes the US different is its piecemeal approach: rather than establishing a single piece of legislation to deal with everything in AI, the idea is to leverage existing laws, such as data privacy or intellectual property laws, to regulate the use of the technology. Further, it takes a decentralized approach, involving federal agencies and state legislatures in establishing AI regulations. Interestingly, state governments have moved fast in introducing and enacting AI laws: in 2023, more than 31 US states introduced over 190 AI-related bills, of which 14 became law, though no state has passed a comprehensive AI law. All this is creating a complex environment for companies, especially those that lack the resources to deal with differing laws across states and face high perceived costs of compliance.

 

 

A look at other emerging AI regulations globally

Regions like the UK, Switzerland, and Australia have not yet come up with unified AI regulation, focusing instead on leveraging existing laws and making specific changes to them to accommodate AI. In the Middle East, Bahrain approved a comprehensive AI law in April 2024, while countries like Saudi Arabia and the UAE have no AI-specific regulations yet, although some guidelines exist around AI ethics and governance. In January 2024, China introduced a draft AI law that proposes to form more than 50 national and industry-wide standards for AI by 2026, and in May it released new draft regulations focusing on security requirements for generative AI services.

India has, over the last two years, shifted from a hands-off stance to one where it plans to introduce specific AI regulations. It is working on drafting AI-specific rules that would regulate high-risk AI systems and applications, and plans to include them in the forthcoming Digital India Act, which is set to replace the IT Act of 2000. It also plans to release a draft regulatory framework for AI by June-July 2024.

 

What’s next?

AI technology will draw more government oversight and control amidst concerns around copyright, synthetic content such as deepfakes, and other AI-enabled fraud. As companies and developers focus on the principles of responsible AI, the role of AI regulations will become critical. Gartner predicts that by 2026, 50% of governments worldwide will enforce the use of responsible AI through regulations, policies, and data privacy requirements. Additionally, as foundation models become increasingly complex, AI regulations will help navigate the challenges of data quality and data protection. In this scenario, India’s response in drafting AI regulations will be crucial, considering the country is a significant base for tech talent. As per Stanford University, India grew the fastest in AI talent concentration between 2016 and 2023. However, it lags in private investment in AI, with only $9 billion invested during this period, compared to $335 billion in the US. Stronger regulations will help boost investments while putting the necessary guardrails in place.




