Types of Chatbots: Script-Based and AI-Based

A chatbot processes a user's question and provides an appropriate response. Chatbots operate based on pre-programmed responses, artificial intelligence, or both.

Script-based chatbots

Script-based chatbots (also called rule-based, command-based, keyword-based, or transaction-based chatbots) communicate using pre-defined responses. They always follow a script, responding according to a set of if/then rules that can vary in complexity. They do not understand the context of a conversation and respond only when the user's message contains a recognized keyword or command.

When such a chatbot is asked a question like “How can I reset my password?”, it first looks for familiar keywords in the sentence; here, those are “reset” and “password.” It then matches them against the answers stored in its database. But if the question is beyond the chatbot's capabilities, for example because a keyword is misspelled or used with a different meaning, the chatbot may fail to match the question to an answer. Because of this, rule-based chatbots often ask the user to rephrase their question, and some can transfer the user to a human operator if necessary.
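The sketch below illustrates this keyword-matching logic in Python. It is a minimal illustration, not production code; the rule table, responses, and function names are invented for this example.

```python
# A minimal sketch of a script-based chatbot, assuming a hand-written
# rule table. Keywords and responses are invented for illustration.

RULES = {
    ("reset", "password"): "To reset your password, use the 'Forgot password' link on the login page.",
    ("opening", "hours"): "We are open Monday to Friday, 9 am to 5 pm.",
    ("refund",): "Refunds are processed within 5 business days.",
}

FALLBACK = "Sorry, I didn't understand that. Could you rephrase your question?"

def reply(message: str) -> str:
    """Look for known keywords in the message and return the scripted answer."""
    # Normalize: lowercase and strip trailing punctuation from each word.
    words = {w.strip("?!.,") for w in message.lower().split()}
    for keywords, response in RULES.items():
        # A rule fires only if every one of its keywords appears in the message.
        if all(k in words for k in keywords):
            return response
    # No rule matched: ask the user to rephrase (a real bot might
    # escalate to a human operator at this point).
    return FALLBACK

print(reply("How can I reset my password?"))   # matches the scripted answer
print(reply("My pasword stopped working"))     # misspelling -> fallback
```

Note how the misspelled query falls straight through to the fallback: the bot has no way to recognize “pasword” as “password,” which is exactly the brittleness described above.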

Rule-based chatbots cannot learn from past experience; they respond only with what they already know. The only way to improve such a chatbot is to add more predefined responses and refine its rule-based mechanisms. Still, because these chatbots are cheap and simple to build, they are widely used.

AI-based chatbots

AI-powered chatbots are programs that can communicate more naturally with users. They use machine learning, natural language processing, and sentiment analysis.

Like rule-based chatbots, AI chatbots need to be well trained and equipped with pre-set responses. Beyond that, they are trainable, can understand multiple languages, and can detect a customer's mood, which allows them to tailor their communication to a specific person.
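The sketch below shows the flavor of this approach: a toy intent classifier built with scikit-learn stands in for the machine-learning component, and a hand-made word list stands in for real sentiment analysis. All training phrases, intent names, and responses are invented for illustration.

```python
# A rough sketch of an AI-style chatbot: a statistical intent classifier
# plus a crude stand-in for sentiment analysis. All training phrases,
# intent names, and word lists are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labelled dataset of (user phrase, intent) pairs.
training = [
    ("how do i reset my password", "password_reset"),
    ("i forgot my password", "password_reset"),
    ("i can't log in to my account", "password_reset"),
    ("when are you open", "opening_hours"),
    ("what are your business hours", "opening_hours"),
    ("i want my money back", "refund"),
    ("how do i request a refund", "refund"),
]
texts, intents = zip(*training)

# TF-IDF features + logistic regression: unlike a keyword lookup,
# the model can generalize to phrasings it has never seen verbatim.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, intents)

ANSWERS = {
    "password_reset": "You can reset your password from the login page.",
    "opening_hours": "We are open Monday to Friday, 9 am to 5 pm.",
    "refund": "I can start a refund request for you right away.",
}

# Toy "sentiment analysis": a hand-made list of negative words.
NEGATIVE_WORDS = {"angry", "terrible", "awful", "frustrated", "useless"}

def reply(message: str) -> str:
    words = {w.strip("?!.,'") for w in message.lower().split()}
    intent = model.predict([message])[0]
    # Soften the tone if the customer sounds upset.
    prefix = "I'm sorry for the trouble. " if words & NEGATIVE_WORDS else ""
    return prefix + ANSWERS[intent]

print(reply("I'm frustrated, I can't get into my account"))
# Likely output: "I'm sorry for the trouble. You can reset your password ..."
```

Even with a handful of training phrases, the classifier can handle a question it never saw word-for-word, and the mood check lets it adjust its tone; a production system would simply use far larger datasets and dedicated NLP and sentiment models.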

AI chatbots get smarter with each conversation because they learn from users. This has become a challenge for chatbot creators and was well demonstrated in a Microsoft experiment called Conversational Understanding.

The Conversational Understanding experiment involved running an artificial intelligence called Tay on Twitter. Tay was supposed to chat with real people and show that a computer program could become smarter through “casual and light conversations.” After talking to Twitter users for just a couple of hours, Tay began sending tweets like “Hitler was right.”

Within 24 hours, Microsoft removed it from Twitter, saying, "We deeply regret Tay's unintentionally offensive tweets, which do not reflect our opinions, what we stand for, or how we designed Tay. While we prepared for many types of failures, we made a critical oversight."

The experiment showed that AI chatbots are imperfect and have their limitations. Tay was not trained well enough: she “blindly” imitated the language and behavior of Twitter users who deliberately fed her extremist statements.

