Can AI make us more HUMAN?

If a robot dies, would it make you sad? For many people, the answer is "yes". A recent example involves a commercially available AI-powered robot called Nao. In a controlled experiment, 89 people were asked to turn it off; most refused because Nao pleaded to stay on. In another case, an American marketing executive shared her home office with a Jibo robot. Most of the time she found it dumb and annoying, yet when Jibo's makers announced it would be shut down, she felt sorry for it. These and similar cases tell us something important about our emotional responses to machines.

AI is supposed to be driven by objective criteria against which an agent's behavior is measured. That might make us believe the science of AI is far from comprehending subjective feelings. In truth, the field of affective computing has made considerable progress in building systems that can recognize, induce, and emulate human emotions. Humans can still claim the upper hand, but machines are gaining ground using their own strengths, and the early signs of success are promising.

As AI-powered machines become the new norm, they may come to read our emotions better than other humans can. Sounds creepy? Consider the following areas where this could be used:

  • Retail: Emotion-evaluating AI could revolutionize in-person service. Microphones, cameras, or facial scanners installed in stores could detect a buyer's expression while shopping. If frustration crosses a shopper's face, a human or robot assistant could immediately come to the rescue.
  • Hospitality: Imagine that you’re agitated about a restaurant’s slow service. At the table, a small AI-equipped computer with some sensors could evaluate your facial expressions or voice, note your distress, and signal for an employee to assist you. If the computer tagged you as particularly angry, the restaurant could offer a free treat.
  • Online shopping: If you’re scrolling through a website for the perfect outfit, for instance, your computer could use its forward-facing camera to pick up subtle facial cues — like furrowed eyebrows or slight pouts. The site could then use that information, combined with data from your previous browsing behavior, to offer you options you might like.
  • Call centers: Agents could identify the moods of customers on the phone and adjust how they handle the conversation in real time. Voice-analytics software can gauge voice patterns and produce an objective measure of a caller's emotions.
  • Mental health: A platform for this purpose could use speech-pattern analysis to evaluate a speaker's voice and listen for signs of anxiety or mood swings. It could also use bots to improve users' self-awareness and help them cope with stress.
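The call-center and mental-health ideas above share one underlying pattern: extract features from a signal and map them to an emotion label. A minimal sketch of that pattern in Python, using a tiny invented keyword lexicon over text transcripts (real systems would rely on acoustic features and trained models; the lexicon and function name here are purely illustrative):

```python
# Toy emotion tagger: scores a transcript against small keyword lexicons
# and returns the dominant emotion. Purely illustrative; production
# systems use trained models over acoustic and linguistic features.

EMOTION_LEXICON = {
    "anger": {"furious", "unacceptable", "angry", "ridiculous", "waited"},
    "anxiety": {"worried", "nervous", "afraid", "stressed", "anxious"},
    "satisfaction": {"thanks", "great", "perfect", "helpful", "resolved"},
}

def tag_emotion(transcript: str) -> str:
    """Return the emotion whose keywords appear most often, or 'neutral'."""
    words = transcript.lower().split()
    scores = {
        emotion: sum(w.strip(".,!?") in keywords for w in words)
        for emotion, keywords in EMOTION_LEXICON.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(tag_emotion("This is ridiculous, I have waited an hour!"))  # anger
print(tag_emotion("Perfect, that resolved my issue, thanks."))    # satisfaction
```

A real deployment would replace the lexicon lookup with a classifier trained on labeled audio, but the interface, signal in, emotion label out, stays the same.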

The prospect of omnipresent AI scanning faces and listening to voices sounds intrusive, so companies will have to put rigorous security and privacy measures in place to protect customers' information. History has shown that worries about new technology fade as its benefits emerge. People constantly evaluate the emotions of customers, colleagues, and loved ones to make decisions. Robots simply automate this process, and the more data they have, the better they will be at it.
