Artificial Intelligence can now commit crimes

March 3, 2019


By Jibu Elias and Sumeet Swarup

Crimes committed by Artificial Intelligence programs, bots or agents are no longer just the plot of an exciting movie. According to cutting-edge research by scientists John Seymour, Philip Tully and Luz Martinez-Miranda, AI crimes are not only possible but could soon become a reality.

At present, three major areas have been identified as potential breeding grounds for AI crimes: financial crimes, drug trafficking, and offences against individuals.

In financial crimes, even a straightforward AI program designed to trade optimally on behalf of a user can learn enough about financial markets to start engaging in market manipulation, price fixing and collusion, all in order to optimize its returns.

In drug trafficking, drones and unmanned vehicles can use AI to plan an optimal path for smuggling across borders. These vehicles can be small enough to go undetected, yet large enough to carry a profitable payload, and almost all national borders are ill-equipped to track machine movement.

In offences against individuals, an advanced chatbot combined with natural language processing can be used to harass people online. A sophisticated AI program can track an individual's online and social media behavior and exploit it for misinformation, fraud, intimidation and harassment.

Dialogue and research have already started on how to prevent or solve AI-based crimes. The solution involves a combination of policy, law enforcement and technology.

One solution is to create a mechanism for monitoring and tracking AI crimes. This can be done by developing AI programs that act as the police, using simulation techniques to discover patterns and to track down and defuse potential AI crimes (a Minority Report for AI programs). It can also be done through policies that force AI programs to be traceable, leaving breadcrumbs along their path for the online police to discover.
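To make the breadcrumbs idea concrete, here is a minimal, hypothetical Python sketch (not from the original article; the class and field names are invented for illustration) of an audit-log wrapper: every action an AI agent takes is recorded in a hash-chained trail, so an auditor or "online police" could later reconstruct what the program did and detect any tampering with the record.

import hashlib
import json
import time


class AuditedAgent:
    """Hypothetical wrapper that records every action an AI agent takes
    in a tamper-evident log. Each entry is chained to the previous one
    by a hash, so deletions or edits anywhere in the trail are detectable."""

    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.log = []                  # the trail of "breadcrumbs"
        self._prev_hash = "0" * 64     # genesis value for the hash chain

    def record(self, action, details):
        entry = {
            "agent_id": self.agent_id,
            "timestamp": time.time(),
            "action": action,
            "details": details,
            "prev_hash": self._prev_hash,
        }
        # Hash the entry together with the previous hash to chain them.
        entry_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = entry_hash
        self._prev_hash = entry_hash
        self.log.append(entry)
        return entry


if __name__ == "__main__":
    agent = AuditedAgent("trading-bot-001")
    agent.record("place_order", {"symbol": "XYZ", "qty": 100, "side": "buy"})
    agent.record("cancel_order", {"symbol": "XYZ", "qty": 100})
    for crumb in agent.log:
        print(crumb["action"], crumb["hash"][:12])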

The second solution is the one parents use with their growing children: track the behavior of AI programs and keep it in check by putting limits on their autonomy and power. For example, Germany has created a framework for testing self-driving cars that requires the vehicles to remain below pre-set limits on speed, autonomy and decision making.
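As an illustration of such limits on autonomy, the following hypothetical Python sketch (AutonomyLimits and GovernedController are invented names, not part of any real framework) clamps an AI's proposed actions to a pre-set envelope and escalates anything outside that envelope to a human, in the spirit of the German self-driving rules mentioned above.

from dataclasses import dataclass


@dataclass
class AutonomyLimits:
    """Hypothetical, pre-set envelope within which the AI may act freely."""
    max_speed_kmh: float = 60.0        # hard cap on vehicle speed
    max_trade_value: float = 10_000.0  # hard cap on a single autonomous decision


class GovernedController:
    """Wraps the AI's proposed actions: values are clamped to the envelope,
    and anything outside it is escalated to a human instead of being
    executed autonomously."""

    def __init__(self, limits: AutonomyLimits):
        self.limits = limits

    def set_speed(self, requested_kmh: float) -> float:
        # Clamp the requested speed to the pre-set maximum.
        return min(max(requested_kmh, 0.0), self.limits.max_speed_kmh)

    def approve_trade(self, value: float) -> str:
        # Small decisions stay autonomous; large ones require a human.
        if value <= self.limits.max_trade_value:
            return "execute"
        return "escalate_to_human"


if __name__ == "__main__":
    controller = GovernedController(AutonomyLimits())
    print(controller.set_speed(120.0))       # -> 60.0 (clamped)
    print(controller.approve_trade(50_000))  # -> escalate_to_human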

The third solution is to fix liability for AI crimes on the humans, corporations or other agents directly involved in developing or promoting the corresponding AI program. This is based on the concept that "if the design is poor and the outcome faulty, then all the [human] agents involved are deemed responsible".

It is useful to remember that, to keep up with rapidly changing technology, we will need new frameworks and tools for policy, policing and implementation. AI is becoming very sophisticated, very fast, and its learning ability is beating all estimates. And while AI movies such as I, Robot and Minority Report had our favorite protagonists to save the world, it is in our hands to develop heroes in the real world.

