
The Ethics of Agentic AI - Rethinking Accountability in the Age of Autonomous Decision-Makers
May 5, 2025

As someone who has operated at the helm of using technology to deliver business results, I've seen firsthand how emerging technologies can transform the way we work, deliver value, and grow. But among all the innovations we've embraced, one stands out for its sheer potential—and its ethical complexity: agentic AI.

Unlike traditional software that follows predefined instructions, agentic AI operates with a level of autonomy. It perceives inputs, makes decisions, and takes initiative in pursuit of specific goals. In doing so, it blurs the boundaries between tool and actor. And that raises a profound question we are not ready for: When an AI agent makes a mistake or causes harm, who is accountable?

The New Complexity of Accountability

In our transformation programs, we've implemented automation that streamlines everything from internal support to customer interactions. Until now, however, that automation has been rule-based, with a predefined set of outcomes that is well understood and planned for. As we move toward more agentic systems—ones that can plan, adapt, and act—we are introducing an element of the unknown and a new kind of complexity into our organizational structure.

Let's consider a real possibility: an AI-powered procurement assistant replacing the familiar combination of OCR automation, invoice processing, and payment. This system is responsible for the end-to-end process—parsing documents, matching them to purchase orders, triggering payments autonomously, and bringing only exceptions or low-confidence decisions to human review. Now imagine a multi-line invoice in which one section contains previously billed items and another contains new charges. The system could easily treat the entire document as new and approve a duplicate payment based on past approval patterns.

There is no malicious code, no software bug — the AI would have simply made a flawed but logical decision based on its training. The outcome? A large overpayment, several strained vendor conversations, and a hard look at the accountability frameworks surrounding our systems.
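One way to contain this failure mode is to make the ambiguous case itself a trigger for human review. The sketch below is a minimal, hypothetical illustration — the names (`InvoiceLine`, `route_invoice`, the 0.9 threshold) are my own assumptions, not any real procurement product's API — showing how a mixed invoice (some lines already billed, some new) can be forced to escalation regardless of the model's confidence score:

```python
from dataclasses import dataclass

@dataclass
class InvoiceLine:
    description: str
    amount: float
    previously_billed: bool  # already matched to a past payment

def route_invoice(lines: list[InvoiceLine], confidence: float,
                  threshold: float = 0.9) -> str:
    """Decide whether an invoice may be paid autonomously.

    A mixed invoice is exactly the ambiguous case described above,
    so it is always escalated, even at high model confidence.
    """
    has_billed = any(l.previously_billed for l in lines)
    has_new = any(not l.previously_billed for l in lines)
    if has_billed and has_new:   # mixed document: possible duplicate
        return "human_review"
    if confidence < threshold:   # low-confidence decision
        return "human_review"
    return "auto_pay"

# The multi-line invoice from the scenario is forced to review,
# even though the model itself is 97% confident:
invoice = [InvoiceLine("servers", 50_000.0, previously_billed=True),
           InvoiceLine("support", 8_000.0, previously_billed=False)]
print(route_invoice(invoice, confidence=0.97))  # -> human_review
```

The design choice here is deliberate: the escalation rule is deterministic and sits outside the learned model, so it cannot be eroded by whatever "past approval patterns" the model has absorbed.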

These aren’t theoretical musings. They are fast becoming operational challenges.

From Tool to Teammate

Agentic AIs are increasingly behaving like junior team members. They can draft documents, negotiate pricing strategies, or even initiate actions across systems. As tech leaders, we must ask: Do we need a new accountability framework that mirrors how we treat human agents?

Our existing policies and systems were not designed for AI that “thinks.” Unlike a script or a chatbot, an agentic AI’s behavior can evolve in real time, depending on context. That means auditability and explainability are no longer just desirable—they're non-negotiable.

Designing Ethics into Deployment

One of our core principles in transformation has been: Design for trust. The same must apply to agentic AI.

That means:

  • Building transparent logs of agent decisions and reasoning.
  • Establishing “red lines” or constraints on autonomous behavior.
  • Creating shared accountability models between IT, business units, and compliance teams.
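The first two points above can be sketched in a few lines. This is an illustrative toy, not a real audit system — the constraint name, threshold, and log fields are all assumptions of mine — but it shows the shape: every agent action is checked against an explicit "red line" and recorded in an append-only trail, whether it was allowed or blocked:

```python
import time

# Hypothetical red line: payments above this amount may never
# be executed autonomously.
RED_LINES = {"max_autonomous_payment": 10_000.0}

def log_decision(agent: str, action: str, reasoning: str,
                 amount: float, audit_log: list) -> bool:
    """Check an action against red lines; record it either way."""
    allowed = amount <= RED_LINES["max_autonomous_payment"]
    audit_log.append({
        "ts": time.time(),
        "agent": agent,
        "action": action,
        "reasoning": reasoning,  # the agent's own stated rationale
        "amount": amount,
        "allowed": allowed,
    })
    return allowed

trail: list = []
log_decision("procure-bot", "pay_invoice", "matched open PO", 8_000.0, trail)
log_decision("procure-bot", "pay_invoice", "pattern match only", 50_000.0, trail)
print([e["allowed"] for e in trail])  # -> [True, False]
```

Crucially, the blocked action is logged too: the audit trail must show what the agent tried to do, not only what it did.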

Just as importantly, we need clarity. Employees must understand the role of AI agents in their workflows. Customers must know when they are interacting with a machine. And legal teams must know how to respond when AI-generated decisions go wrong.

A Call for Proactive Governance

Waiting for regulation to catch up would be a mistake. We, as enterprise leaders, must take the lead in shaping internal governance around agentic AI. That includes:

  • Developing AI ethics councils inside the organization.
  • Training managers to supervise AI agents like they would remote contractors.
  • Defining escalation protocols for AI-initiated actions.

The goal is not to stifle innovation, but to channel it responsibly.

Final Thoughts

As agentic AI becomes more capable and more embedded in our organizations, we’re going to have to rethink what it means to “own” a decision. The question isn’t just technical—it’s cultural, ethical, and strategic. And as leaders, we must help shape the answer.

We are standing at a crossroads where the digital agents we build today will shape the accountability systems of tomorrow. Let's ensure they're built not just to act—but to act responsibly.



