
Agentic AI Systems: Opportunities, Challenges, and the Need for Robust Governance

 

AI agents, also known as intelligent agents, are software programs designed to perceive their environment, take actions, and achieve specific goals autonomously. They differ from traditional computer programs in their ability to learn and adapt, make decisions, interact with their surroundings, and operate with limited supervision. Agentic AI systems integrate one or more AI agents that collaborate to deliver a unified, seamless experience and outcome for the end user.

Chatbots have been around for a while. Are chatbots AI agents, or do they differ from them?

AI agents and chatbots share similarities in their ability to interact with users, but differ significantly in their capabilities and underlying technologies:

Focus
  • Chatbots: Predefined tasks, scripted responses, and simple interactions
  • AI agents: Complex tasks, dynamic decision-making, and collaboration with other agents

Technology
  • Chatbots: Often rule-based, relying on keyword matching and pre-programmed responses
  • AI agents: Built on AI techniques such as machine learning, natural language processing, natural language understanding, knowledge representation, and basic levels of causal reasoning

Capabilities
  • Chatbots:
    • Follow scripts: Provide predefined responses based on user input, keywords, or decision rules
    • Limited adaptivity: Cannot learn or adapt significantly beyond their initial programming
    • Independent: Function individually and typically do not collaborate with other agents
  • AI agents:
    • Learn and adapt: Continuously learn from data and user interactions and improve their performance over time
    • Reason and plan: Understand context, reason through problems, and make informed decisions based on their goals
    • Collaborate: Work with other agents to achieve complex goals that require coordination and communication

Example
  • Chatbots: A customer service bot that follows a predefined script, provides basic information, and directs you to appropriate resources based on predefined options
  • AI agents: A highly trained personal assistant that understands your needs, learns your preferences, and takes initiative in helping you achieve your goals

 

In essence, AI agents are like general-purpose tools that can be adapted to various tasks requiring intelligence, decision-making, and collaboration. Chatbots, in contrast, are like specialized tools that excel at specific, well-defined tasks but lack the flexibility and adaptability of AI agents.

How to Build Agentic AI Systems

AI Agent Frameworks for Building Collaborative Intelligence

AI agent frameworks combine libraries and workflows that facilitate the creation and management of intelligent software agents. The following are some of the latest frameworks for building AI agents:

 

AutoGen

AutoGen from Microsoft provides a multi-agent conversation framework as a high-level abstraction. It is an open-source library for building next-generation LLM applications in which users compose LLM workflows with multi-agent collaboration and personalization. Its agent modularity and conversation-based programming simplify development and enable reuse for developers.

Use case: An enterprise knowledge management system with a conversational interface, built from a knowledge base agent, a retrieval agent, and a dialogue agent.
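
A minimal sketch of this pattern, assuming the open-source pyautogen package and an OpenAI API key in the environment; the agent names, model choice, and sample question are illustrative placeholders, not prescribed by AutoGen:

import os

from autogen import AssistantAgent, UserProxyAgent

# LLM configuration; the model name and environment variable are assumptions.
llm_config = {
    "config_list": [
        {"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}
    ]
}

# An assistant playing the knowledge-base role for the enterprise use case.
kb_agent = AssistantAgent(
    name="knowledge_base_agent",
    system_message="Answer employee questions using the company knowledge base.",
    llm_config=llm_config,
)

# A user proxy that relays the employee's question and collects the reply.
employee = UserProxyAgent(
    name="employee",
    human_input_mode="NEVER",        # fully automated for this sketch
    code_execution_config=False,     # no code execution needed here
    max_consecutive_auto_reply=1,    # keep the sample conversation short
)

# Start a conversation between the two agents.
employee.initiate_chat(
    kb_agent,
    message="Summarize our remote-work policy in three bullet points.",
)

In a fuller enterprise setup, the retrieval agent would sit between the user proxy and the knowledge base agent, fetching relevant documents before the answer is composed.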

 

CrewAI

CrewAI is an open-source framework built on top of LangChain for creating and managing collaborative AI agents. It enables developers to build cohorts of specialized AI agents that can work together to achieve complex tasks.

Use case: A marketing team could use CrewAI to create a series of agents: one to gather customer data from social media, another to analyze sentiment, and a third to generate targeted marketing campaigns based on the insights.
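
A hedged sketch of that marketing crew, assuming the crewai package and an LLM API key configured in the environment; the roles, goals, backstories, and task descriptions below are illustrative assumptions:

from crewai import Agent, Task, Crew

# Three specialized agents matching the marketing use case above.
researcher = Agent(
    role="Social Media Researcher",
    goal="Gather recent customer posts that mention the product",
    backstory="Monitors social channels for brand mentions.",
)
analyst = Agent(
    role="Sentiment Analyst",
    goal="Classify the gathered posts by sentiment",
    backstory="Specializes in customer sentiment analysis.",
)
marketer = Agent(
    role="Campaign Writer",
    goal="Draft a targeted campaign based on the sentiment insights",
    backstory="Writes marketing copy grounded in data.",
)

# Tasks are assigned to agents and, by default, run sequentially.
gather = Task(
    description="Collect recent customer posts about the product.",
    expected_output="A list of representative posts",
    agent=researcher,
)
analyze = Task(
    description="Summarize the sentiment across the collected posts.",
    expected_output="A short sentiment summary",
    agent=analyst,
)
campaign = Task(
    description="Draft a campaign brief that addresses the sentiment findings.",
    expected_output="A one-page campaign brief",
    agent=marketer,
)

crew = Crew(agents=[researcher, analyst, marketer], tasks=[gather, analyze, campaign])
print(crew.kickoff())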

 

LangGraph

LangGraph is another open-source framework built on top of LangChain. It represents multiple agents as nodes in a graph and ensures seamless integration and collaboration among them.

Use case: An AI research assistant comprising a web-scraping agent that collects research content, a processing agent that identifies relevant material and synthesizes and stores curated content, and a generation agent that crafts initial drafts of research papers based on the user's goals and objectives.
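
A minimal sketch of such a pipeline using LangGraph's StateGraph, assuming the langgraph package; the node functions are illustrative stubs (real agents would call scrapers and LLMs):

from typing import TypedDict

from langgraph.graph import StateGraph, END

# Shared state passed between the agent nodes.
class ResearchState(TypedDict):
    topic: str
    raw_content: str
    curated_content: str
    draft: str

def scrape(state: ResearchState) -> dict:
    # Stub: a real agent would call a search or scraping tool here.
    return {"raw_content": f"collected articles about {state['topic']}"}

def process(state: ResearchState) -> dict:
    # Stub: a real agent would filter, synthesize, and store curated content.
    return {"curated_content": f"curated notes from: {state['raw_content']}"}

def generate(state: ResearchState) -> dict:
    # Stub: a real agent would call an LLM to draft the paper.
    return {"draft": f"initial draft based on {state['curated_content']}"}

# Wire the three agents into a simple linear graph.
graph = StateGraph(ResearchState)
graph.add_node("scraper", scrape)
graph.add_node("processor", process)
graph.add_node("generator", generate)
graph.set_entry_point("scraper")
graph.add_edge("scraper", "processor")
graph.add_edge("processor", "generator")
graph.add_edge("generator", END)

app = graph.compile()
print(app.invoke({"topic": "agentic AI governance"})["draft"])

The graph form also allows branches and loops, for example routing back to the scraper when the processing agent finds too little relevant material.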

Challenges of Adopting Agentic AI

Despite the significant advancements in Agentic AI, there are several key challenges that still need to be addressed, such as:

  • Unforeseen consequences: Agentic AI systems, due to their adaptability and ability to learn, can potentially engage in unforeseen actions or decisions, leading to unintended consequences.
  • Limited understanding of internal workings: The complex decision-making processes within these systems can be opaque. This can make it difficult to identify the root cause of errors or failures.
  • Transparency in data usage and processing: Concerns exist regarding potential misuse of user data by agentic AI systems and the need for transparent practices in data collection, storage, and utilization.
  • Unmitigated bias: Training data and algorithms can contain inherent biases that agentic AI systems may learn and perpetuate, leading to discriminatory or harmful outcomes.
  • Understanding decision-making: It’s often challenging to understand how agentic AI systems arrive at specific decisions, hindering user trust and hampering troubleshooting or improvement efforts.

Guide to Successful Development & Implementation of Agentic AI

As agentic AI systems go mainstream with their ability to accomplish complex goals, a robust governance framework is needed to overcome these challenges. A recent paper from OpenAI titled “Practices for Governing Agentic AI Systems” outlines guidelines for the safe and responsible development and deployment of such systems. The following are some key insights from the paper that enable responsible development and adoption:

  • Defining Responsibilities:
    • Clear roles and liabilities: Clearly define who is responsible for the actions of agentic AI systems throughout their lifecycle, including developers, deployers, and users. This promotes accountability and mitigates potential harm.
    • Attributability: Provide a unique identifier to AI agents so that it is possible to trace the source of error when required.
  • Ensuring Safety:
    • Robust safety measures: Implement safeguards like regular audits, human oversight for critical decisions, and clear guidelines for acceptable actions to minimize potential risks and unintended consequences.
    • Constrain the action space and seek approval: In some cases, prevent agents from taking specific actions entirely to ensure safe operation. It is prudent to keep a human in the loop for review and approval when the cost of a wrong decision or action could be catastrophic (see the sketch after this list).
    • Timeouts: Implement mechanisms to periodically pause the agent operation and require human review and reauthorization, preventing unintended harm from continuous unsupervised operation.
    • Setting the agent’s default behavior: Reduce the likelihood of the agentic system causing accidental harm by proactively shaping the model’s default behavior so that it reiterates user preferences and goals and steers toward the least disruptive actions that still achieve the agent’s goal.
  • Transparency and Explainability: Ensure the reasoning and decision-making processes of agentic AI systems are clear and understandable to the extent possible. This fosters trust and allows for identification of potential biases or flaws.
  • Automatic Monitoring: Set up a Monitoring AI system that automatically reviews the primary agentic system’s reasoning and actions to check that they are in line with the user’s goals and expectations.
  • Ad hoc Interruption and Maintaining User Control: Users should always be able to trigger a graceful shutdown procedure for their agent at any time, whether to halt a specific category of actions or to terminate the agent’s operation more generally.
  • Ethical Considerations: Ensure that the development and deployment of agentic AI systems adhere to ethical principles and societal values. This includes promoting fairness, non-discrimination, privacy, and overall human well-being.
  • Public Dialogue and Participation: Encourage open discussions and collaboration between experts, policymakers, and the public to shape responsible AI development and ensure it aligns with societal values.
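
To make a few of these practices concrete, the hedged sketch below (plain Python, no particular framework) illustrates a constrained action space, human-in-the-loop approval for high-impact actions, and a periodic re-authorization timeout. The action names, approval list, and interval are illustrative assumptions, not prescriptions from OpenAI's paper:

import time

# Hypothetical guardrail wrapper illustrating three practices from the list above.
ALLOWED_ACTIONS = {"read_document", "draft_email", "send_email"}
REQUIRES_APPROVAL = {"send_email"}        # high-impact actions need a human
REAUTH_INTERVAL_SECONDS = 15 * 60         # pause for review every 15 minutes


class GuardedAgent:
    def __init__(self) -> None:
        self.last_authorized = time.time()

    def execute(self, action: str, payload: dict) -> str:
        # Timeout: require human re-authorization after a fixed interval of
        # unsupervised operation.
        if time.time() - self.last_authorized > REAUTH_INTERVAL_SECONDS:
            if input("Re-authorize the agent? (y/n): ").strip().lower() != "y":
                return "halted: re-authorization denied"
            self.last_authorized = time.time()

        # Constrained action space: refuse anything outside the allow-list.
        if action not in ALLOWED_ACTIONS:
            return f"blocked: '{action}' is outside the allowed action space"

        # Human-in-the-loop: seek explicit approval for high-impact actions.
        if action in REQUIRES_APPROVAL:
            if input(f"Approve '{action}'? (y/n): ").strip().lower() != "y":
                return f"blocked: '{action}' was not approved"

        # The action passed all guardrails; perform it (stubbed here).
        return f"executed: {action} with {payload}"


agent = GuardedAgent()
print(agent.execute("draft_email", {"to": "team"}))   # allowed
print(agent.execute("delete_records", {}))            # blocked by allow-list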

It’s important to note that OpenAI’s framework is just a starting point, and ongoing research and discussion are crucial in developing comprehensive and effective governance models for agentic AI systems.

The Road Ahead

Real-world systems involve an amalgamation of multiple capabilities, which warrants Agentic AI designs that use multiple AI agents. Such design patterns are emerging because large language models (LLMs) cannot reliably produce outputs for complex tasks in a single API call. Since a system’s output quality is the product of the output quality of each subsystem (for example, four subsystems that are each 90% accurate yield roughly 0.9^4 ≈ 66% end-to-end accuracy), each subsystem needs its own output verification, validation, and feedback loop to ensure reliable and trustworthy outcomes. Governance of agentic AI is an ongoing process that requires continuous adaptation and improvement as the technology evolves.

By fostering collaboration, promoting transparency, and prioritizing ethical considerations, we can navigate the development and deployment of agentic AI responsibly and reap its benefits for the betterment of society. Agentic AI systems offer immense potential and are going to be a game changer in the coming days.

Author

Jayachandran Ramachandran



