Imagining the AI future

Less than a decade after the machine-learning leap in artificial intelligence, AI-based systems have already upended many fields, online advertising, e-commerce, financial risk assessment, cyber-physical systems and travel among them, creating new winners and losers. An incipient AI race is underway, and some fear that this techno-economic competition will soon extend to weapon systems, with deep consequences for the nature of warfare.

Then there is the social impact. AI brings a qualitatively different dimension to the ongoing digitalization of social and economic systems. AI systems will increasingly act like human experts in key socio-economic areas such as legal and penitentiary systems, taxation and banking, training and human resource management, health, mobility and crime prevention. Even if the net impact on jobs ends up being positive, the extent of re-skilling and retooling required in many fields could be unlike that of any previous technological disruption.

The fear factor is palpable, as is the greed quotient. Trillions of dollars' worth of additional growth could be at stake at a time of trade tensions, shifting global supply chains and slowing economies. The early disruptors in any field being transformed by AI could gain a huge advantage. Quality data at scale could become the distinguishing factor in success, as could niche hardware that can run high-volume computations at the edge.

Governance itself could be transformed. The early brushes with social-media-mediated political campaigns in democracies such as the US, UK, France, Brazil and India have underlined the power of predictive, probabilistic and personalized outreach. If established players in stolid analogue businesses such as drilling for oil or driving people around in taxis could be disrupted by upstarts with no previous experience of those fields, imagine the fate of national, or even international, governance platforms if the power of digital and AI platforms were used to reshape power equations. That this is no longer a utopian dream is clear from the ambition of platforms such as Facebook's cryptocurrency Libra.

Is it too early to think about AI's impact on the global order? Is it even necessary or fruitful? After all, very few speculated on the impact of electricity, often cited as the best parallel for AI, on the global order, and it would be impossible today to disentangle electricity's impact on global power equations from other factors.

Or should we be looking at other parallels, nuclear power for example, and scrambling to assess AI's potential role in reordering global equations? And could we be missing the real driver of the reordering: not AI as some kind of boxed technology or neatly laid-out technology pathway, such as enrichment or spent-fuel reprocessing for nuclear devices, but something else, data-sets for example, or the ability to re-imagine a field using the digital opportunity?

Predicting an AI-impacted global order, understood here as the web of mutual expectations, norms and institutions that ties state and non-state actors together across borders, could be an errand for a fool or a prophet, and over-speculation is not worthwhile. However, prudence demands that we take some early steps to shape the change and manage its consequences. Such steps ought to include developing a shared vocabulary, common understandings, standards, benchmarking practices, protocols for interoperability and validation, measures for data security and privacy, safeguards for individual and community rights, other governance measures, and public goods that level the playing field for researchers and companies.

Civil aviation is an interesting parallel. From the making of airliners to the booking of flights, it is fiercely competitive. Airlines can be symbols of national pride, and landing rights the subject of intense negotiations. However, collaboration is still the dominant norm, whether in passenger safety and comfort, the certification of pilots or the efficient and accurate handling of baggage. Interoperability has been built painstakingly for mutual benefit, and a distributed governance framework that includes globally negotiated norms provides clarity on roles and responsibilities.

Inclusiveness is a powerful tool at this early stage to shape the use of AI and avoid its misuse. It can work locally, nationally and globally to dampen AI's potential negative impact on the global order. The report of the United Nations Secretary-General's High-level Panel on Digital Cooperation, supported by evidence from the ground in Asia, Africa, Europe and America, underlines the power of inclusiveness and digital public goods. Digital public goods, like their analogue counterparts, create wider demand and crowd in investments and innovations for digital products and services. They help bend the arc of investment towards 'AI for Good' and the achievement of the Sustainable Development Goals (SDGs).

To unleash the power of inclusiveness, we need both common rails and guard rails: common rails to level the playing field for new entrants and allow innovation to scale across society; guard rails to protect against misuse and abusive monopolies. These twin tools have to be developed collaboratively by governments, private companies, non-profits and technologists, and accordioned smartly with risk. If, in a particular area, for instance predictive analytics for crime prevention, the risks of human-rights abuse escalate, we should be in a position to tighten regulation; the regulatory accordion can be eased as responsible practices get established, risks become manageable and trust in the governance of the technology builds up.

Equally, we need human and institutional capacity, so that the emerging economic opportunities can work for everyone and institutions of all kinds, from schools and universities to regulators and parliaments, can work with and around AI.

Above all, we need to rapidly stand up platforms that incentivize collaboration around data and AI and turn the focus away from purely competitive approaches. These platforms will not take value or work away from the private sector and national research institutions. Instead, they will provide the public goods that private companies and researchers need to generate AI solutions sustainably. Such public goods could include pooled data-sets, benchmarking of algorithms, interoperability standards, guidance for data security and privacy, regulatory sandboxes, computational capacity and expertise. A start could be made in an area such as health, where geopolitical sensitivities are fewer, and the learning extended gradually to other areas internationally.


Amandeep Singh Gill, Project Lead, International Digital Health & AI Research Collaborative
