By Sumeet Swarup and Jibu Elias
Do you remember the James Bond villains of the Cold War era? They were always trying to start a war between the USSR and the USA so they could profit from it. Well, there is a new villain in town, and this one seems better positioned to start one.
“Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air. Russia said it had ‘identified the missile’s trajectory and will take necessary measures to ensure the security of the Russian population and the country’s strategic nuclear forces.’ The White House said it was ‘extremely concerned by the Russian violation’ of a treaty banning intermediate-range ballistic missiles. The US and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine’s Crimea region and backed separatists in eastern Ukraine.”
The above news article is fake, generated by a very sophisticated AI language model called GPT-2, developed by OpenAI, a non-profit organization based in Silicon Valley and backed by the likes of Elon Musk. The model is so good that OpenAI has withheld its full version, lest some eager state or private actor, intentionally or otherwise, decide to start World War III.
Here is another example:
“The incident occurred on the downtown train line, which runs from Covington and Ashland stations. In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief. ‘The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,’ said Tom Hicks, the U.S. Energy Secretary, in a statement. ‘Our top priority is to secure the theft and ensure it doesn’t happen again.’ The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials. The Nuclear Regulatory Commission did not immediately release any information. According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation. ‘The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,’ Hicks said. ‘We will get to the bottom of this and make no excuses.’”
We often don’t realize how powerful news is. News shapes views, views harden into beliefs, and beliefs translate into action. At the extreme end, fake news can start wars, move stock markets, and trigger mob behaviour and hysteria. Say a trader releases fake news about a company: the next day the stock falls, and before anybody finds out, the trader has already shorted the stock and made a fortune. Or, the day before a local election, a fake scandal about a candidate breaks; he loses votes, and before anybody realizes it, his opponent has won. These are extreme examples, but you get the picture.
Researchers have warned that such technology can be used not just to generate fake news articles, but also to impersonate others online and to automate the production of abusive content, spam and phishing material.
So what practical implications are there for us?
More responsibility will rest on online news channels – whether large editorial groups such as the Times or NDTV, curated ones such as Inshorts, or open platforms such as Twitter, Facebook and Google. Already, you can see large tech companies hiring armies of fact-checking partners to constantly screen online news.
The brand of the channel will become important. People will grow sceptical of channels that don’t maintain a record of authenticity. Once bitten, twice shy. Fresh news and scoops may make a channel big, but presenting authentic news will become a qualifier.
There is an opportunity for new technologies and startups to detect fake news. Indian startup MetaFact uses natural language processing to understand the context of news articles, blog posts and social media posts, and flags those that appear fake. If successful, such startups will be the new darlings of the landscape.
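To give a flavour of how text-based fake-news detection works at its simplest, here is a minimal sketch of a Naive Bayes text classifier in Python. The training examples and labels are entirely made up for illustration; this is not MetaFact’s actual method, which would be far more sophisticated.

```python
import math
from collections import Counter

def train(docs):
    """Train a tiny Naive Bayes text classifier.
    docs: list of (text, label) pairs."""
    word_counts = {}          # label -> Counter of words seen under that label
    label_counts = Counter()  # label -> number of training documents
    for text, label in docs:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest log posterior probability."""
    vocab = {w for c in word_counts.values() for w in c}
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)  # log prior
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the probability
            score += math.log((word_counts[label][word] + 1) /
                              (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy training data -- invented purely for illustration
training = [
    ("shocking secret cure doctors hate", "fake"),
    ("you will not believe this miracle", "fake"),
    ("celebrity scandal exposed shocking photos", "fake"),
    ("parliament passed the budget bill today", "real"),
    ("the central bank held interest rates steady", "real"),
    ("local council approves new metro line", "real"),
]
wc, lc = train(training)
print(classify("shocking miracle cure exposed", wc, lc))   # fake
print(classify("council passed the metro budget", wc, lc)) # real
```

Real systems go well beyond word counts – they model context, source reputation and cross-article consistency – but the underlying idea of scoring text against learned patterns is the same.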
In fact, there is an entire industry just being born – tools to tell you whether this is a real social media account, tools to tell you whether a person posted this or a bot did, new spamming filters, cyber policing, cyber help, new techniques for IP tracking, masking of abusive content – new tools, new language and a new skill set to traverse all of this.
Even if OpenAI does not release the full GPT-2 model, other AI models offer similar capabilities – Google’s BERT, AI2’s ELMo and fast.ai’s ULMFiT are examples. It looks like the war against fake news is just getting started. And this time, Goldfinger is ever more difficult to find, let alone kill.