by Sumeet Swarup
Policies and rules are usually framed for an area when the government sees a current or future imbalance in an important sector and feels that it is harming, or could harm, certain stakeholders, or unduly benefit certain others. The question of setting policies and rules for Artificial Intelligence has started coming up in formal and informal settings around the world. It was also the topic of a discussion at the World Economic Forum Annual Meeting in Davos recently. At a session moderated by the Editor-in-Chief of Wired magazine, Mr. Nicholas Thompson, the topic of “Setting Rules for the AI Race” was hotly debated by senior leaders such as Kai-Fu Lee (founder of Sinovation Ventures), Amitabh Kant (CEO of NITI Aayog), Amy Webb (professor at NYU), Jim Hagemann Snabe (Chairman of Siemens) and David Siegel (founder of the hedge fund Two Sigma).
Some fundamental questions surround this topic: Are rules really needed? Will the introduction of rules be detrimental to a nascent sector? What are the areas for which rules should be set? How heavy and binding should the rules be? Who should set them? What should be the penalty for violations?
Mr. Kai-Fu Lee and Mr. Amitabh Kant were of the view that too much regulation early on will kill the sector. And if rules were to be framed, Mr. Kai-Fu Lee said that countries should set their own rules, while Mr. Kant was more in favour of an international consortium of countries and companies. Both were, of course, speaking from their vantage points of how China and India are approaching their AI strategies. Ms. Amy Webb, on the other hand, felt that most AI companies and entities are optimizing for the here and now, without a view of the big picture, and that there is a real risk of sector imbalances growing rapidly in the future.
Additionally, Mr. Kai-Fu Lee pointed out that AI is just a software tool, completely dependent on the people who run it, and hence it is more important to regulate and manage the people instead. It was also mentioned that AI is applicable across industries such as health, legal, food, retail and telecom, and those industries already have their own laws and regulations. So it might be useful to look at those laws and see how they can be adapted to the new technology in each context, rather than imposing overarching regulations on the technology itself.
In general, it is important that any AI tool is fair and not biased against a certain population class, for example in law enforcement cases. Ms. Amy Webb brought out the point that most initial AI engines have been trained on common data sets such as ImageNet and Wikipedia, which in turn have been developed by a certain section of the population and carry in-built biases. So the AI engine is biased and starts making biased decisions, which are then reinforced by the data they generate.
Another important topic that came up was how to make AI explainable. Currently, AI is seen as a black box that churns out solutions, and nobody really knows how it did so. That is fine if you are using Google Maps, but if the AI is helping make legal or medical decisions, the consumer will want to know how it reached them. Can there be any regulatory requirements in this case? And if the technology is not up to making AI explainable (as is currently the case), then what is the recourse?
One of the discussion points that came up again and again was that AI is a centralizing force: companies that have good data will be able to develop an efficient AI tool, which in turn will generate good data, which will make the tool even better, thus making it self-perpetuating. Wealth will accrue to the few. Currently, a handful of companies are becoming very powerful: the G-MAFIA in the US (Google, Microsoft, Amazon, Facebook, IBM, Apple) and BAT in China (Baidu, Alibaba, Tencent). Should there be any regulation of these big companies? Should there be rules around the sharing of big data, so that many companies can benefit from it?
Other important points that came up during the discussion were: What is the future of AI and how fast is it developing? What policy framework can we have that will adapt to the rapid development of AI? How far will countries go in using AI for military purposes? Will drones be allowed to make kill decisions? Who owns the data? How do we handle privacy and data protection? Is democracy under threat, given that AI is a centralizing force?
Beyond the WEF session and the interesting points mentioned above, it is important for us in India to think about this topic carefully. AI has a lot of potential for India, and we should think about policies and rules not just to regulate the sector, but to proactively grow it. How do we channelize investments? How do we build useful applications and services from the rich data on Government servers? How do we promote a domestic industry and its adoption? How do we create jobs and skills for the AI future? How do we promote the use of AI to solve our social problems of healthcare, education, poverty, etc.? What incentives do we give private industry to solve India-scale problems?
It is true that AI in India is very nascent, and early regulation tends to kill a sector. But if we frame our own unique policies, not policies for regulation but policies for growth, they can act as a much-needed catalyst to accelerate the AI ecosystem.