
Need for Greener Ways to Train Machine Learning Models

May 11, 2021

AI


Training the artificial intelligence models that underpin web search engines, power smart assistants and enable driverless cars consumes megawatts of energy and generates worrying carbon dioxide emissions. But new ways of training these models are proving to be greener.

Rising Energy Consumption in Training

Artificial intelligence models are increasingly widespread in today's world. Many carry out natural language processing tasks, such as language translation, predictive text and email spam filtering. They are also used to let smart assistants such as Siri and Alexa "talk" to us, and to operate driverless cars.

But to function well, these models have to be trained on large sets of data, a process that involves carrying out many mathematical operations for every piece of data they are fed. And the data sets they are trained on keep growing: one recent natural language processing model was trained on a data set of 40 billion words.


Most AI models are trained on specialized hardware in large data centers. According to a recent paper in the journal Science, the total energy consumed by data centers made up about 1% of global energy use over the past decade, roughly equal to the consumption of 18 million US homes. And in 2019, a group of researchers at the University of Massachusetts estimated that training one large AI model used in natural language processing could generate around the same CO2 emissions as five cars over their entire lifetimes.
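To put such figures in perspective, here is a minimal back-of-envelope sketch of how training energy translates into emissions: multiply energy use by the carbon intensity of the electricity grid. All of the numbers below are illustrative assumptions, not figures taken from the Science paper or the University of Massachusetts study.

```python
# Back-of-envelope estimate: emissions = energy used * grid carbon intensity.
# All numbers are illustrative assumptions, not figures from the cited studies.

energy_kwh = 650_000      # assumed total energy for one large training run (kWh)
grid_intensity = 0.43     # assumed grid carbon intensity (kg CO2 per kWh)

emissions_tonnes = energy_kwh * grid_intensity / 1000
print(f"Estimated training emissions: {emissions_tonnes:.0f} tonnes CO2")

# For scale, assume a typical car emits about 57 tonnes of CO2 over its
# lifetime (also an assumption); this run then compares to roughly 5 cars.
car_lifetime_tonnes = 57
print(f"Roughly {emissions_tonnes / car_lifetime_tonnes:.1f} car lifetimes")
```

With these assumed inputs the estimate lands at around 280 tonnes of CO2, or close to five car lifetimes, which is the order of magnitude the Massachusetts researchers reported.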


Federated Learning Can Help

Researchers at the University of Cambridge set out to investigate more energy-efficient approaches to training AI models. Working with collaborators at the University of Oxford, University College London and Avignon Université, they explored the environmental impact of a different form of training, called federated learning, and found that it can be significantly greener. Instead of training models in data centers, federated learning trains them across a large number of individual machines, which the researchers found can lead to lower carbon emissions than centralized training.

Senior Lecturer Dr. Nic Lane explains how it works when training is performed not inside large data centers but across thousands of mobile devices, such as smartphones, where the data is usually collected by the users themselves.

"An example of an application currently using federated learning is the next-word prediction in mobile phones," he says. "Each smartphone trains a local model to predict which word the user will type next, based on their previous text messages. Once trained, these local models are then sent to a server. There, they are aggregated into a final model that will then be sent back to all users."

"Users might not want to share the content of their texts with a third party," he explains. "In federated learning, we can keep data local and use the collective power of millions of mobile devices together to train AI models without users' raw data ever leaving the phone."


When training a model to classify images in a large image dataset, they found that any federated learning setup in France emitted less CO2 than any centralized setup in either China or the U.S. And when training a speech recognition model, federated learning was more efficient than centralized training in any country.
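Why the country matters comes down to the carbon intensity of the local electricity grid. The sketch below illustrates the effect with assumed round numbers; the energy figures and per-country intensities are illustrative, not the paper's measurements. Even if a federated setup draws somewhat more energy overall, running it on a low-carbon grid like France's can still emit less CO2 than centralized training on a more coal-heavy grid.

```python
# Same formula as before, different grids. Intensities and energy figures
# are assumed for illustration, not measurements from the study.
grid_intensity = {        # approximate kg CO2 per kWh
    "France": 0.06,       # nuclear-heavy, low-carbon grid
    "US": 0.40,
    "China": 0.55,
}

centralized_kwh = 1000    # assumed energy for a centralized training run
federated_kwh = 1300      # assume federated draws somewhat more energy overall

for country, intensity in grid_intensity.items():
    print(f"{country}: centralized {centralized_kwh * intensity:.0f} kg CO2, "
          f"federated {federated_kwh * intensity:.0f} kg CO2")

# Federated in France: 1300 * 0.06 = 78 kg CO2, still well below
# centralized training in the US (400 kg) or China (550 kg).
```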

These results are further supported by an expanded set of experiments in a follow-up study ("A first look into the carbon footprint of federated learning") by the same lab, which explores an even wider variety of data sets and AI models. That research also lays the beginnings of the formal and algorithmic foundations needed to drive federated learning's carbon emissions even lower in the future.


Sources: American Association for the Advancement of Science (Science journal), University of Cambridge



Nishant Kumar
Technology Enthusiast
