Gen AI in India: Navigating Regulation Without Overreach

August 6, 2025

As generative AI (Gen AI) begins to influence everything from how we search to how we create, India faces a crucial regulatory challenge: how do we ensure safety, rights protection, and accountability without stifling the very innovation that could drive inclusive growth?

A recent Approach Paper on Regulation of Gen AI in India (Approach Paper) by Trilegal adds valuable perspective to this discussion, and I have attempted to capture its key points in this short blog.

The Approach Paper proposes a regulatory model that leans on existing laws, encourages self-regulation, and calls for sector-specific guidance over sweeping legislation. While this is a welcome shift, it raises important questions, particularly around liability, copyright, and implementation.

Before we decide how to regulate Gen AI, it is important to first understand the nature of the risks it presents. Not every harm introduced by this technology is entirely new; many are extensions of risks that existing laws already address. At the same time, Gen AI has also created new challenges that our current legal framework is not equipped to handle. Recognising this distinction is essential for building a regulatory approach that is both effective and proportionate.

Covered Harms vs. New Harms: A Useful Regulatory Lens

From a regulatory standpoint, Gen AI harms fall into two broad categories:

  • Covered harms: Risks like impersonation, fraud, or defamation are already addressed under existing laws such as the IT Act, the Bharatiya Nyaya Sanhita, and the Consumer Protection Act. Gen AI amplifies these risks but doesn’t create them.
  • New harms: Risks unique to Gen AI’s capabilities, for instance copyright infringement through model training and easy access to harmful information.

This distinction helps policymakers focus on where regulation is truly needed, rather than reinventing legal wheels for well-covered offences. To mitigate these harms, the Approach Paper recommends a three-step strategy:

  1. First, check if existing laws already cover the harm.
  2. If laws apply but are unclear, issue guidelines or subordinate legislation.
  3. Only if both fail, consider amending the law.

Mitigating the New Harms

  1. Copyright: A Murky Middle Ground

Training Gen AI models often involves using vast amounts of publicly available content, much of which is protected by copyright. This raises a significant policy dilemma: such data is essential for building competitive AI models, yet its unregulated use can undermine the rights of original creators. The question, therefore, is how to enable AI innovation without eroding those rights.

The Approach Paper acknowledges this tension and suggests a dual strategy:

  • Voluntary contractual relationships between relevant parties, or revenue-sharing mechanisms, to ensure creators are compensated, and
  • A legislative Text and Data Mining (TDM) exception that would allow AI models to be trained on legally accessed copyrighted content.

However, how these two proposals work together remains ambiguous. Should every AI developer enter into individual contracts with rights-holders? What will count as “legally accessed” data under a TDM exception? What mechanisms will creators have to detect and challenge unauthorised use of their work?

In the absence of regulatory clarity, both creators and developers may face uncertainty around rights and compliance.

  2. Easy Access to Dangerous Information

Gen AI doesn’t just generate content — it makes complex, harmful knowledge more accessible and understandable. Critical information that once required deep search skills is now a prompt away.

The Approach Paper warns against penalising access itself and instead emphasises regulating use, since it is only when such information is used to cause harm that a crime or other violation takes place. It calls for institutions like the IndiaAI Safety Institute to detect emerging threats and collaborate globally. But operationalising this, especially in a decentralised, fast-evolving landscape, remains a work in progress.

The Liability Puzzle

Liability is perhaps the most debated aspect of Gen AI regulation, and rightly so. The debate centres on whether AI developers and deployers should be held responsible for every harmful output generated by their models; striking the right balance between accountability and innovation remains a core challenge. The Approach Paper proposes a safe harbour regime: Gen AI providers and deployers should not be held liable unless they fail to act after being notified of harmful content. This aims to accommodate the non-deterministic nature of AI, i.e., the fact that developers cannot always predict how a model will behave.

But this raises legitimate concerns:

  • Does non-determinism absolve responsibility, or merely complicate enforcement?
  • Is post-facto accountability feasible when harm (e.g., disinformation or defamation) spreads instantly?
  • How will due diligence be assessed or standardised?

While the intent is to avoid overregulation, a more detailed framework is needed — especially for high-risk sectors or sensitive content.

Conclusion

India’s Gen AI regulatory moment is not just about managing risk; it is about enabling innovation, safeguarding rights, and reflecting our unique developmental priorities. The Approach Paper offers a thoughtful starting point. But as Gen AI evolves, so must our regulatory thinking: beyond safe harbours and exceptions, towards a deeper understanding of how policy, data, and accountability intersect in the age of machine creativity.
