
According to the US Department of Energy, pumping systems account for almost 20 percent of the energy consumed by electric motors worldwide and approximately 25 to 50 percent of total electric energy usage in certain industrial facilities. Among other applications, pumps move crude oil through vast pipeline networks, which in turn play an indispensable role in transporting hydrocarbons to key markets. Suppliers are already developing solutions such as digital twins that can leverage data analytics to optimize the performance of pump stations for crude oil and other pipelines.

Digital Twins and Oil & Gas 4.0 - Key Benefits

As digital replica modeling tools, digital twins support a digital culture of "fail fast, learn quick" by providing the perfect testing ground for innovative new ways of working. Connectivity to the components and equipment also provides real-time monitoring and adjustment capabilities. The original concept came from the desire to take all information available on a piece of equipment or asset and then apply higher-level analysis to that information.

Digital Twin of a Petroleum Refinery (Source: GE Ventures)

 

Digital twins can help oil & gas companies:

  • detect early signs of equipment failure or degradation, moving from reacting and responding to failures to being proactive, which enables owner-operators to plan and implement corrective maintenance before failure occurs, often at much lower cost
  • model drilling and extraction to determine whether virtual equipment designs are feasible
  • gather real-time data feeds from sensors in an operational asset to know its exact state and condition, no matter where it is located (a minimal sketch of this state-mirroring idea follows this list)
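
To make the third point concrete, here is a minimal sketch of the state-mirroring idea behind a digital twin: a virtual object that stays synchronized with sensor readings from the physical asset and flags degradation against engineering limits. The class, sensor names, and thresholds below are illustrative assumptions, not any vendor's implementation:

```python
# Minimal sketch of a digital twin's state mirror: ingest live sensor
# readings and compare them against simple engineering limits.
# All names and thresholds are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    asset_id: str
    readings: dict = field(default_factory=dict)

    def ingest(self, sensor: str, value: float) -> None:
        """Update the twin with the latest reading from one sensor."""
        self.readings[sensor] = value

    def degradation_flags(self) -> list:
        """Return the sensors currently exceeding their limits."""
        limits = {"bearing_temp_c": 85.0, "vibration_mm_s": 7.1}  # illustrative
        return [s for s, limit in limits.items()
                if self.readings.get(s, 0.0) > limit]

twin = PumpTwin("crude-pump-07")
twin.ingest("bearing_temp_c", 91.3)
twin.ingest("vibration_mm_s", 4.2)
print(twin.degradation_flags())  # ['bearing_temp_c'] -> plan maintenance early
```

In a real deployment, the ingest step would be fed by a plant historian or IIoT gateway, and the limits would come from the asset's engineering model rather than hard-coded constants.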

The real advantage of the digital twin concept, however, materializes when all aspects of the asset (from design to real-time operating and status data) are brought together to optimize the asset over its lifetime.  Companies can test pricing levels, logistics challenges, even potential safety hazards. A digital twin allows users to identify numerous plausible futures for an asset and consider their potential impact.

Best Practices of Oil & Gas 4.0

Recent research indicates that many oil and gas organizations implementing the Internet of Things (IoT) are already using or plan to use digital twins in 2018. The number of participating organizations using digital twins is expected to triple by 2022. Several best practices in this area are emerging among the major engineering and oil & gas firms:

  • involve the entire value chain
  • establish well-documented practices for constructing and modifying digital twins
  • include data from multiple sources (as-builts, operational data, costs, maintenance programs, engineering detail, physical constraints, behavioral patterns, operating parameters, customer demands, and weather patterns)
  • look beyond normal software development cycles to consider asset lifecycle issues

Initiatives Already Underway

Due to its asset-intensive nature and reliance on large pieces of highly instrumented equipment, often operating in remote, unsafe, and unforgiving locations, the oil & gas industry has had digital twins on its agenda for several years.

Shell, together with Akselos, a Swiss engineering modeling and simulation technology company, and LICengineering, a Danish consultancy specializing in the marine and offshore energy sectors, has recently signed up for a two-year digital twin initiative. The partnership focuses on advancing the structural integrity management of offshore assets by building fully detailed digital twin simulation models. Work is already under way on Shell's North Sea assets, with the intention to improve management of its offshore assets, improve worker safety, and explore predictive maintenance. This initial project has two phases:

  • First, develop a condition-based model of the selected assets, enabling the company to analyze structural integrity with more accuracy and detail
  • Second, combine this model with sensor data, allowing Shell to monitor the health of its assets in real time and enabling the company's operators to predict their future condition

The world's first "digital rig" is targeted to achieve a 20 percent reduction in operational expenditures across the targeted equipment and improve drilling efficiency. The solution connects to all targeted control systems, including the drilling control network, the power management system, and the dynamic positioning system. Data is collected through individual IoT sensors and control systems, modeled and then centralized on the vessel before being transmitted in near real time to GE's Industrial Performance & Reliability Center for predictive analytics. The system has already started to capture multiple anomalies and produce alerts of potential failures up to two months before they would occur. The data models come from a digital twin of the various physical assets, along with advanced analytics to detect behavioral deviation. Thanks to vessel-wide intelligence, personnel both on the vessel and onshore can gain a holistic view of an entire vessel's health and the real-time performance of each piece of equipment onboard.
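
The article does not describe GE's analytics in detail, but the "behavioral deviation" idea can be illustrated with a toy rolling z-score detector that flags a reading far outside a sensor's own recent history. The window, threshold, and data below are invented for illustration:

```python
# Toy "behavioral deviation" detector: flag a value that drifts far
# from the sensor's own recent history. Real predictive analytics are
# far richer; this is only a rolling z-score on made-up data.

from collections import deque
from statistics import mean, stdev

def make_detector(window: int = 50, threshold: float = 3.0):
    history = deque(maxlen=window)

    def check(value: float) -> bool:
        """Return True if the value deviates strongly from recent behavior."""
        anomalous = False
        if len(history) >= 10:  # need some history before judging
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalous = True
        history.append(value)
        return anomalous

    return check

check = make_detector()
readings = [20.1, 19.8, 20.3] * 10 + [34.7]  # sudden spike at the end
alerts = [i for i, v in enumerate(readings) if check(v)]
print(alerts)  # the spike's index -> raise an alert for investigation
```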

In 2017, BP invested in the start-up Beyond Limits to build upon its existing NASA- and DOD-derived experience in robotics. The intent was to operationalize new insights from operations to help locate and develop reservoirs, enhance production and refining of crude oil, and increase process automation and efficiency. Extensive infrastructure was established, including supercomputers and 2,000 km of fiber-optic cable, and large investments were made to increase data storage to six petabytes. As a result, IoT sensors are collecting data about temperature, chemicals, vibration, and more from oil and gas wells, rigs, and facilities.

The Gazprom subsidiary Gazpromneft-Khantos has established an Upstream Control Centre that pulls together already-established solutions. The objective is to improve upstream process efficiency from a central operating center. One of the most important components has been the digital twin developed for mechanical fluid lifting, built around hybrid models. This is further enhanced with machine learning tools and the ability to self-calibrate based on rapidly changing information sourced from automated controls. Information collated by the digital twin, new maintenance solutions, and other Gazpromneft-Khantos systems is accumulated at the Control Centre, where multifunctional teams can display and visualize it to make timely, well-informed decisions. The functionality of the Gazpromneft-Khantos Upstream Control Centre will be significantly expanded in the future. Currently, the company is completing testing of digital twins for formation pressure maintenance systems, energy supply systems, and the treatment and utilization of associated petroleum gas.

The Challenges Ahead

Developing this defining technology of the Fourth Industrial Revolution is not for the casual "toe-dipper," as the journey to true digitalization is challenging for any enterprise. Each aggregated digital twin is unique, ultimately enabling powerful data analytics, new machine learning, and potentially valuable information across the OEM network.

Establishing digital twins requires a focused, cross-functional team that spans the organization, incorporating technical expertise across the infrastructure and the enterprise IT and OT applications, from the OEM through to the fully constructed and operational asset.

 

Reprinted with permission; the original blog was posted here. You may also visit here for more such insights on the digital transformation of industries.

 About ARC Advisory Group (www.arcweb.com): Founded in 1986, ARC Advisory Group is a Boston based leading technology research and advisory firm for industry and infrastructure.

 For further information or to provide feedback on this article, please contact akanagali@arcweb.com

 

About the Author:

 

Jyoti Prakash

Analyst

Jyoti’s research primarily focuses on oil & gas industry analysis, upstream oil & gas automation, and digital oilfield technologies.

 

 

The first industrial revolution marked the transition from traditional hand production to machine-driven production. The second was characterised by rapid industrialization driven by the technological innovations of the day, moving from small production units to massive factories where hundreds of people worked at any given time. The third industrial revolution could be credited to the World Wars – perhaps the only good to come out of the carnage and destruction. This revolution saw the introduction of computers and robots to aid production, resulting in greater efficiency, throughput and quality.

We are now in the fourth revolution, which is commonly known as Industry 4.0.

Industry 4.0 has been powered by a significant number of factors. All previous transitions were driven by economic considerations, but 4.0's turn cannot be explained so simply. In the past few years, technology has evolved in both power and application; businesses are now even more complicated entities, involving a range of stakeholders, from the governments of the states they operate in to local communities who now have an active say in how their resources are consumed; and customers, both corporate and individual, are more demanding. This is no longer a world in which "any color, as long as it is black" is an acceptable business policy.

Once, technology meant tethering beasts to flywheels to grind flour. Now it means everything – from the solar panels that power the motor, to the sensors that determine how finely the flour has been ground, to the robots that pack it for dispatch, to the climate-controlled trucks used for transport, to electronic billing and delivery, to alerts over mobile phones and delivery with drones. Technology itself is a broad umbrella under which other heads can be grouped.

 

Connectivity

 

It is impossible for us – even those who grew up with rotary telephones and fat picture-tube television sets – to imagine a world without smartphones, let alone mobile phones. Statista reports that the number of smartphone users in the world is likely to hit 5 billion by 2019. That's more than 60% of the world's population, up from single-digit figures barely half a decade ago.

It is no wonder, then, that advertising has moved from traditional media – such as television and newspapers – to digital media consumed on handheld devices. Internet costs have dropped across the world while access infrastructure has scaled up simultaneously, which has also had a chicken-and-egg effect on smartphone usage. In emerging markets like India and China, more people consume data on their smartphones than on computers.

Smartphones have also opened up an entirely new ecosystem for businesses to reach out to and stay in touch with their customers. In an interesting statistic, as many billion-dollar companies were started in the past fifteen years as in the sixty years before that. Almost all of the former have relied on technology to drive their business models.

But the journey, as far as we can see, has barely begun. Smartphones have apps, and apps are a surer way for companies to communicate back and forth with their customers. They offer fewer distractions, better engagement and more powerful usage data than any other medium can. As we move from 4G to 5G, even bandwidth will cease to be a constraint when it comes to connectivity.

Industry 4.0 will continue to be driven by connectivity solutions that don't stop at smartphones. Sensors too have become more accurate and efficient. They consume less power, can detect greater ranges and can often be connected wirelessly to other devices to form a network that can monitor everything from one end of the business to the other. The air around us is buzzing with a gazillion zeros and ones every second – data streams that connect hundreds of devices and control units.

The Internet of Things, which is the more popular term for such an arrangement, has quickly moved from being the stuff of sci-fi (or, at the very least, top secret) to a tangible technology we can see around us. Companies are no longer asking for proofs-of-concept, because there is no longer a need to prove the concept. Instead, they are asking us how we can help them add instrumentation and automation to their work – whether it's production or service delivery. A project we did recently was for a company that automated legacy plants. We worked on the software that would monitor the feeds from the sensors, escalate appropriately when faults or warnings occur, and deliver detailed reports on how the systems were performing.
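
That project is described only at a high level, but its three jobs – watch sensor feeds, escalate faults and warnings, and keep data for reporting – can be sketched in a few lines. Everything here (thresholds, severity levels, routing targets, the feed itself) is hypothetical:

```python
# Hypothetical sketch of a plant-monitoring loop: scan sensor readings,
# escalate out-of-range ones by severity, and keep rows for reporting.
# Thresholds and routing targets are invented for illustration.

import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

SEVERITY_ROUTE = {"warning": "shift-engineer", "fault": "plant-manager"}  # illustrative

def escalate(sensor: str, severity: str, value: float) -> None:
    """Route an event to the right people based on severity."""
    target = SEVERITY_ROUTE.get(severity, "ops-dashboard")
    logging.warning("notify %s: %s %s=%.1f", target, severity, sensor, value)

def process_feed(feed):
    """Scan a feed of (sensor, value) pairs and escalate out-of-range ones."""
    report = []
    for sensor, value in feed:
        severity = "ok"
        if value > 100:
            severity = "fault"
        elif value > 80:
            severity = "warning"
        if severity != "ok":
            escalate(sensor, severity, value)
        report.append((sensor, value, severity))
    return report  # feeds the detailed performance reports

print(process_feed([("boiler_temp", 75.2), ("boiler_temp", 88.0), ("boiler_temp", 103.5)]))
```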

IoT is proving particularly useful in home automation and logistics. In home automation, it is used to make living conditions more efficient – climate control, turning off fans and lights when rooms are vacated, running robotic vacuum cleaners, monitoring air quality and such. Logistics employs IoT to keep track of movement of people, product and vehicles.

Cloud services, in a way, are driving these innovations. Maintaining exclusive servers is still an expensive proposition, and the need for redundancies to fall back on makes it even more so. As a best practice, companies must provision more server capacity than they need so that their stacks do not fail even at times of high demand – yet that means that for more than 95% of their running time, server stacks sit under-utilized. That's where cloud computing changed the rules of the game.

The cloud – a network of servers, some of them serving specific purposes – promises scalability, usage optimisation, failsafes, flexibility and near-perfect uptime. This is often delivered with instantiation and load-balancing algorithms that spread the traffic to the application so that there is very little chance of a system crash or a DDoS event. Even if one or two nodes of the cloud were to go down, neither data nor operational connectivity would be lost.
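
As a toy illustration of that resilience, here is a round-robin balancer that skips unhealthy nodes, so losing a node does not interrupt service. Real cloud balancers add health probes, weights and session affinity; the node names here are invented:

```python
# Toy round-robin load balancer that routes around failed nodes.
# Node names and the health flags are invented for illustration.

from itertools import cycle

class LoadBalancer:
    def __init__(self, nodes):
        self.healthy = {n: True for n in nodes}
        self._ring = cycle(nodes)

    def mark_down(self, node):
        """Record a failed health check for a node."""
        self.healthy[node] = False

    def route(self):
        """Return the next healthy node, round-robin."""
        for _ in range(len(self.healthy)):
            node = next(self._ring)
            if self.healthy[node]:
                return node
        raise RuntimeError("no healthy nodes")

lb = LoadBalancer(["node-a", "node-b", "node-c"])
lb.mark_down("node-b")
print([lb.route() for _ in range(4)])  # traffic flows around the dead node
```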

 

Data and Intelligence

 

Big data, artificial intelligence, machine learning and blockchain are the technologies that most affect business intelligence. Companies are now able to connect data from diverse sources and make sense of it, helping them identify correlations and behaviour patterns that might otherwise be missed. Big data – massive volumes of data – is used for fraud detection and prevention, price discovery, customer analysis, operational benchmarking, and more.

The first phase of big data left a lot to manual analysis, in that humans still had to make sense of the numbers and take appropriate decisions. The current and future phases, however, are augmented by machine learning. Self-learning programs or bots will be assigned the task of collating new pieces of information as they enter the system, figuring out the patterns for analysis, executing the analysis and eventually putting across an actionable summary for the managers.
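
A condensed sketch of that pipeline: records flow in, get collated into groups, and only an actionable summary comes out for the manager. The transaction data and the crude anomaly rule standing in for real machine learning are purely illustrative:

```python
# Illustrative summary pipeline: collate records as they arrive,
# analyze each group, and emit only the items that need action.
# The data and the one-line anomaly rule are invented.

from collections import defaultdict
from statistics import mean

def summarize(transactions):
    """Group records by category, then surface only what needs action."""
    groups = defaultdict(list)
    for category, amount in transactions:     # collate as data enters
        groups[category].append(amount)

    summary = []
    for category, amounts in groups.items():  # analyze each pattern
        avg, peak = mean(amounts), max(amounts)
        if peak > 3 * avg:                    # crude stand-in for a learned rule
            summary.append(f"review {category}: peak {peak} vs avg {avg:.0f}")
    return summary or ["no action needed"]

data = [("refunds", 40), ("refunds", 50), ("refunds", 45),
        ("refunds", 600), ("sales", 900)]
print(summarize(data))  # ['review refunds: peak 600 vs avg 184']
```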

But machine learning itself is not going to be restricted to a sidekick's role for big data. ML is driving decisions in sectors such as pharma and medicine, agriculture, transportation, hospitality and education. In his book, Kranti Nation: India and the Fourth Industrial Revolution, Pranjal Sharma talks in detail about how ML is changing the landscape even in governance. For instance, a Microsoft Azure project has been employed to track the successes and consequences of a health programme in south India, with a particular brief to understand how relapses occur. It is in fact a must-read for anyone looking to understand in more detail how industry in India is gearing up for modern-day tools and challenges.

Artificial Intelligence and Machine Learning are often mistaken for each other, and indeed, there are enough reasons to suggest why this shouldn't be a big deal. At the same time, as a technocrat, I would consider it an unforgivable sin to take that path of convenience. AI and ML aren't the same. In a manner of speaking, ML is a subset of AI.

Artificial intelligence refers to systems that can think, adapt and decide after taking in various factors, much as we would, even in an unstructured or fuzzy context. Machine Learning, on the other hand, refers to systems that are purposefully built to learn by themselves, often within a broad context or situation. I know, I know, it takes a while to get it… but it is an important distinction nonetheless.

AI, therefore, is implied wherever ML is employed. In addition to this, AI finds uses in systems where there are definite boundaries to what the system must do. For instance, in hospitality, AI is used by service providers to forecast demand based on key parameters, extrapolate it based on new conditions (such as the visit of a celebrity during a festival), determine staffing and material requirements, identify competitive price points and manage dynamic pricing, earmark high-value properties for last-minute guests (who won’t mind paying a premium if there is a shortage of rooms) and even put together additional packages of add-ons and third-party services to create a better experience for the guests.
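
A toy version of that forecasting-and-pricing loop might look like the following. The occupancy figures, event uplift and pricing rule are all invented for illustration; a real system would use trained demand models rather than a hand-written formula:

```python
# Toy hospitality example: forecast demand from recent occupancy plus a
# special-event uplift, then pick a price point. All numbers invented.

BASE_RATE = 120.0  # illustrative nightly rate

def forecast_demand(history, event_boost=0.0):
    """Naive forecast: average of recent occupancy plus an event uplift."""
    baseline = sum(history) / len(history)
    return min(1.0, baseline + event_boost)

def dynamic_price(occupancy_forecast):
    """Raise price as forecast occupancy approaches capacity."""
    if occupancy_forecast > 0.9:
        return BASE_RATE * 1.5   # premium for last-minute scarcity
    if occupancy_forecast > 0.7:
        return BASE_RATE * 1.2
    return BASE_RATE

recent_occupancy = [0.62, 0.68, 0.71]
festival_uplift = 0.25           # e.g. a celebrity visit during a festival
demand = forecast_demand(recent_occupancy, festival_uplift)
print(round(demand, 2), dynamic_price(demand))  # 0.92 180.0
```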

 

Visualization

 

3D, augmented reality, virtual reality and mixed reality are no longer the tools of the elite they once were, when the hardware requirements for running such solutions were prohibitively expensive. Now even a reasonably capable smartphone can run AR/VR/3D applications.

Like AI and ML, AR and VR often end up getting clubbed together. Unlike AI and ML, however, AR and VR are not subset and superset. AR essentially refers to adding digital objects – such as images, text boxes, interactive buttons and videos – to a real-world context. In VR, that real-world context itself is absent. VR creates a completely virtual world, one in which you can look in any direction and still find yourself inside it. The applications they can be used for, therefore, are also different.

AR is the recommended solution when context-specific overlays are needed. For instance, a brochure can be scanned and a 3D model can be superimposed on the images (known as markers in AR) so that the viewer gets a complete 360-degree look at the product. We’ve used this for our real estate, automotive and industrial machinery clients. In the US, AR is already playing a key role in medical training and diagnostics. As an educational tool, few can match the experiential value of AR.

VR, on the other hand, can be used in situations where the viewer might need to fully immerse themselves in an experience. For our real estate clients, we have created virtual apartments that visitors can navigate – either with a joystick or by moving around an area (clear of obstacles). VR would be great for teaching students about, say, the Jurassic era, for helping with phobia therapy, for experimenting with the look and feel of a room, for experimentation that would otherwise be impossible in the real world and as a marketing gimmick. Kids and adults alike love transporting themselves.

Mixed reality is, as the name suggests, a mix of real and virtual worlds. I’m going to let a video describe this one.

A common limitation to all three is that they are very personal technologies. It is only the viewer who has control, and everything is viewed from his/her perspective. Even if you were to display what the viewer is seeing on a big screen, others will see only what the primary viewer – the one wearing the headset and/or holding the device – is seeing.

There is an alternative, though.

Holograms, once the kind of technology Star Trek (and Total Recall?) fans drooled over, are making a comeback of sorts. Until a few years ago, holograms had found only limited popularity. More of a gimmick than a true solution, they weren't interactive and required extensive, expensive hardware setups. That's no longer the case. Holograms can be made interactive and, just as importantly, they can be run on kiosks.

Irrespective of the mode chosen, the success of visualization technology eventually hinges on how good the 3D modelling is. And that brings us to a heading which really doesn’t fit in with the three already listed.

Additive Manufacturing / 3D Printing

3D printing, experts argue, could eventually lead to the re-emergence of cottage industries. While 3D printing cannot be executed on the scale of hundreds of thousands of products a month, as would happen in an assembly-line factory, it can still multiply the normal throughput of small and cottage enterprises, letting them become profitable at their own scale.

3D printing allows rapid prototyping and additive manufacturing.

 

Industry 4.1

 

At the risk of putting forth an unusual term, we have already moved beyond Industry 4.0 and into 4.1, which is where the convergence of these technologies is taking place. AI, for instance, is being used in AR applications to give you real-time, real-world data – such as scanning a car on the street, which pops up an electronic brochure and a choice of looks, all of which can be applied to the car then and there, virtually of course. Want to see how a silver Jag might look instead of the black one in front of you?

Enterprise-level applications, such as Robotic Process Automation (RPA) systems, combine multiple technologies. RPA itself includes machine learning, character recognition, IoT, mobility and more, and is expected to disrupt the sectors it is being introduced into. Many corporations have already invested in automation across their entire business networks; others are commissioning feasibility studies for integrations that will be high on flexibility and low on maintenance in the coming years.

Indeed, one might even say that the revolution is over, done and dusted, and what we are seeing now is the new order of things. One where innovative change is the only constant, where every incremental percent of operational efficiency must be grabbed, where investments must be made for today and tomorrow.

 

Over the next few weeks, we will be looking at each of these technologies in detail. Stay tuned.

Process engineering simulation software has transformed the way process engineers do their job and is essential to maximizing the return on capital investment in your plant or other industrial facility. The proven capabilities and security of the cloud can now transform the way manufacturing owner-operators and engineering firms (EPC, EPCM) deploy simulation software.

 

Please join me for a webinar with Penn Energy. ARC Advisory Group will share best practices in transforming Engineering into agile, optimized operations delivering on Industry 4.0 outcomes.


Digital transformation, Industrial IoT and Industry 4.0 have all created the awareness of and need for change. Cloud services have redefined process engineering and changed the business dynamic for engineering organizations and the role of IT service and shared-service organizations. Cloud-based simulation software:

  • Provides ubiquitous infrastructure, enabling you to run simulations at any time, location, or on any device
  • Enables you to begin optimizing design right away without having to develop special skills
  • Provides all the benefits of engineering software without the overhead of installation, deployment, version control, and hardware maintenance
  • Increases engineering design agility
  • Doesn’t require long IT upgrade and computer refresh cycles to enable you to leverage the latest engineering software features and capabilities
  • Makes it easy to deploy models securely to enable you to share engineering models with partners and suppliers

Slide by AVEVA Digital Transformation

 

Reprinted with permission; the original blog was posted here. You may also visit here for more such insights on the digital transformation of industry.

 About ARC Advisory Group (www.arcweb.com): Founded in 1986, ARC Advisory Group is a Boston based leading technology research and advisory firm for industry and infrastructure.

For further information or to provide feedback on this article, please contact lkanickaraj@arcweb.com

 About the Author:

Peter Reynolds

Peter performs research into process and technology areas such as optimization, asset performance management, and data analytics. He brings more than 25 years of professional experience as an oil and gas subject matter expert. 

Peter is a distinguished thought leader, strategist, and speaker, with an extensive history of practical experience in refinery automation, safety, and IT.  He has also published several whitepapers related to digital transformation and frequently speaks at technical conferences throughout North America and the EMEA region.

Most IT decision-makers are of the opinion that AI is badly needed in the BFSI segment, will lead to drastic improvements in operational efficiency, and will fundamentally transform core financial processes.

 

Conversational AI technologies are the way to go, as they facilitate two-way interactions and allow banks to establish long-term relationships with customers by providing a seamless experience covering multiple banking aspects throughout the year.

 

Over time, conversational AI technologies will evolve towards automated response systems that contextualize and personalize conversations with customers on demand and in real time.

 

Major sub-segments to influence AI implementation in the BFSI sector:

  • Chatbots or Voicebots: This AI technology is programmed to converse with customers on a 24x7 basis. It is programmed to self-learn and conduct intelligent conversations with humans over chat or audio (a simple sketch of the idea follows this list).
  • Robo Advisors: This AI technology uses different platforms to offer investment advice to customers covering multiple issues with zero human intervention.
  • Emotion AI: A branch of AI that enables machines to detect human emotions using advanced facial and voice recognition technologies. It then uses this information to advise customers depending on their mood and state of mind.
  • Data Analysis: AI systems record large volumes of data and use them to evaluate problems related to their own applications as well as to solve problems in other areas of banking.
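
As promised above, here is a simple sketch of the chatbot idea: match a customer query to an intent and reply, falling back to a human agent when nothing matches. Production bots use trained natural-language models; this keyword matcher and its intents are purely illustrative:

```python
# Illustrative banking-chatbot skeleton: keyword-based intent matching
# with a human-agent fallback. Intents and replies are invented.

INTENTS = {
    "balance": (["balance", "how much"], "Your savings balance is ..."),
    "loan":    (["loan", "emi", "interest"], "Here are our current loan rates ..."),
    "card":    (["card", "block", "lost"], "I can block your card right away."),
}

def reply(query: str) -> str:
    words = query.lower()
    for intent, (keywords, answer) in INTENTS.items():
        if any(k in words for k in keywords):
            return answer
    return "Let me connect you to a human agent."  # graceful fallback

print(reply("I lost my card yesterday"))    # card intent
print(reply("What is the EMI on a loan?"))  # loan intent
```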

 

AI will enhance trustful conversations

 

One of the primary aspects associated with AI in banking is its ability to supercharge customer engagement across multiple touchpoints. The coming years will witness the deployment of AI-oriented chatbots and voicebots that can interact with multiple customers using the most advanced conversation options. Voicebots are expected to be the top priority, with 52 percent of surveyed banks planning such AI deployments in the next year, followed by chatbots at 48 percent.

 

Some of the major Indian banks to implement these services include:  

  • State Bank of India: SBI's inTouch chatbot, powered by IBM Watson, uses AI to address various customer queries. The chatbot provides information on banking products and services like loans, term deposits and many more.
  • HDFC Bank: The bank has introduced its own conversational banking chatbot, called EVA, exclusively for its customers. This AI-powered virtual assistant, designed by Bangalore-based AI start-up Senseforth, is capable of answering millions of customer queries across multiple channels.
  • ICICI Bank: The bank's in-house-built iPal chatbot can serve multiple customers, offering assistance with micro-transactions like bill payments and fund transfers. Within a short span of time, the chatbot has handled more than 6 million queries on the bank's portal and its iMobile mobile app, providing information on multiple aspects with close to 90 percent accuracy. The bank claims it is the only AI-led Indian chatbot service available on a bank's website as well as its mobile application.

 

To Conclude

The main objective of most AI technology adopted by banks in India is to offer a more enhanced, proactive and personal customer experience at a lower cost. As banks get more comfortable with these newly deployed AI systems, they will increasingly use them for back-end business processes, which will help reduce human error and improve turnaround times where manual processes are required.

 

To know more, download our latest research paper titled “AI for BFSI”: Artificial Intelligence for Banking, Financial Services & Insurance Sector 

In Part 2 of this article on the legal landscape related to smart manufacturing, we take a look at legal issues surrounding data ownership and data privacy, as well as the implications of artificial intelligence, which is rapidly making inroads into the manufacturing arena.

 

The inspiration for two blog posts (Part 1 and this Part 2) on a much less discussed aspect of Industry 4.0 comes from last month's Smart Manufacturing, 3D Printing & Industry 4.0 Forum event in Singapore, where ARC Advisory Group chaired a panel, Future of Manufacturing: The Emerging Legal Challenges. The panel comprised lawyers Matt Pollins, partner and head of commercial/TMT at legal firm CMS Singapore, and Wong Hong Boon, manufacturing and supply chain legal counsel at 3M Singapore, along with Ani Bhalekar, Head of IoT/Industry X.0 & Mobile Practices for ASEAN+, Accenture; CK Vishwakarma, CEO/Founder of IoT interest group IoTSG; and Dean Shaw, Industry Solutions Director, Microsoft.

What follows are the questions posed by ARC to the panel in the second half of the session and a summary of the subsequent discussions. 

 

Q: Industrial IoT systems can involve data transfer between multiple parties; for example, data from a plant owner’s machine sent to the machine manufacturer and also to a third-party service organization. In such cases, who owns the data? 

The straight legal answer is that there are question marks over whether you can really own data as intellectual property. Because data is not easy to own, you need to nail down in the contract between the customer and the vendors who can control it.

 

But it is a huge issue – who owns or who should own the data. In a large autonomous vehicle project in the mining industry, the deal almost did not happen because the vehicle supplier wanted to own the data, so as to improve their product in perpetuity. The customer’s perspective was that it’s my mining site, why should you own the data – we want the data. The commercial answer really lies in negotiating those two opposing views.  

 

Generally, when we talk about data in the manufacturing context, there are many possible types of data, such as design data, operator data, data generated from the processes, etc. So when you start discussing issues around data ownership it is important to keep in mind that not all data are the same. Also important is to assess whether the data in question actually has value and understand how the data would or could be used by another party.

 

Among C-level executives, data ownership is often a red herring, i.e., an unnecessary distraction. That's because the perception of ownership comes from private property – your car, your house, etc. – with which you can do whatever you want. And that leads to a preconceived notion that you also have the ability to deal with "your data" any way you want. But really, it's much more about the rights of the parties involved and what they can and cannot legally use the data for.

 

Q: We are hearing a lot about the European Union’s new General Data Protection Regulation (GDPR). Is this something that companies in Southeast Asia need to be concerned about?

There is a misunderstanding among some companies that because this is an EU regulation it does not apply to this region; however, that's not quite true. GDPR mandates that companies that collect or process data of EU nationals follow certain regulations. The most onerous is that if there is a data breach, you need to inform regulators within 72 hours. The penalty for contravening GDPR – four percent of worldwide turnover – is also not trivial. So if you are a company operating in, say, Singapore and you collect or process data of EU nationals, you need to comply with the GDPR.

 

Do note, however, that the GDPR only concerns personal data – data that relates to individuals; it does not apply to operational data from a machine or a process. If you are a manufacturing company considering adoption of, say, an IIoT-based predictive maintenance solution for a machine, you don’t need to worry about GDPR because there is no personal data involved. You should still be concerned about data confidentiality and security, because you may not want competitors to know about your plant performance, but these are not data privacy issues relevant to GDPR. 

 

While most information used in manufacturing does not involve personal data, there are certain categories where personal data may be involved. For example, manufacturers are increasingly making products to order for individual consumers. Because you might deliver the product directly to the end customer, you need to hold the addresses and other relevant information of individuals, and these are personal data. Hence, make sure you analyze your processes thoroughly in order to establish all instances of personal data collection and processing.

While GDPR does not apply to operational data from a machine or a process, if you are collecting personal data on EU nationals you need to comply, even if your company is located outside of Europe.

 

Q: With the prevalence of wearable technology, it’s becoming common to track the activity and location of operators in the plant. Is the ensuing data classified as personal data? 

Yes, it is personal data but the organization can use it because you probably consented to that as part of your employment agreement. Generally, you can demand to know what data your company holds about you. Some countries in Southeast Asia do have what’s called data subject access rights.  

 

In a mining industry project involving 1,800 contractors on site, it was important to make sure the contractors were billing for the correct hours and executing the assigned work orders. This led to an Industrial IoT solution for worker location tracking. While adoption was voluntary for the contractors, there was universal take-up, because the solution also kept workers safer in a mining environment with a known history of worker injuries and equipment damage. Probably because of the clear benefits to the workers and to the enterprises, no great concerns were raised about data privacy or what could perhaps be perceived as a rather invasive use of technology.

 

Q: Considering the burgeoning area of artificial intelligence, are there certain legal issues that we should be concerned about? For example, the question of liability in the case of an industrial accident involving an AI based robot or autonomous vehicle? 

In this scenario, it’s really about causation – what caused the issue? Is it a malfunction with the machine? Did your supplier program the machine incorrectly? Is it an individual who wandered into an area that he was not supposed to be in? Is it your fault for telling the worker he was not supposed to go there? So it really depends on what caused the accident and what legal remedies you may have against the vendor, if any.

 

You should already have a contract between you and the machine supplier specifying who is responsible in different situations, and that should generally be sufficient. Most categories of liability have to do with negligence, and there are already laws covering negligence – and criminal laws if it was a deliberate act on the part of a programmer. So in the context of AI, existing laws cover these different scenarios, and we don't really need to introduce new laws to stop machines being evil.

 

For manufacturers starting to adopt AI based systems, it is important to understand the decisions coming from artificial intelligence and what suppliers are building into their algorithms. These are aspects to consider when talking to potential vendors and developing the subsequent contracts with a selected supplier.
 

 

Reprinted with permission; the original blog was posted here. You may also visit here for more such insights on the digital transformation of industry.

 About ARC Advisory Group (www.arcweb.com): Founded in 1986, ARC Advisory Group is a Boston based leading technology research and advisory firm for industry and infrastructure.

For further information or to provide feedback on this article, please contact lkanickaraj@arcweb.com

 About the Author:

Bob Gill

Bob joined ARC Advisory Group in 2014 after a decade-long career in industrial technology media, most recently as Editor-in-Chief at Singapore's Contineo Media, where he had editorial management responsibility for Control Engineering Asia, Asia Food Journal, PharmaAsia, Logistics Insight Asia, and Payload Asia, while also concurrently being Editor of Control Engineering Asia.

 

Two lawyers walk into a smart manufacturing conference. Sounds like the beginning of a bad joke, right? Well, no, actually, it was the reality last month when lawyers Matt Pollins, partner and head of commercial/TMT at legal firm CMS Singapore, and Wong Hong Boon, manufacturing and supply chain legal counsel at 3M Singapore, joined a panel chaired by ARC Advisory Group at the inaugural Smart Manufacturing, 3D Printing & Industry 4.0 Forum event, in Singapore, to discuss legal challenges emerging from the rising adoption of technologies like cloud, internet of things, and artificial intelligence in the manufacturing sector. 

 

On stage along with the lawyers and contributing to what was a vibrant and engaging panel session: Ani Bhalekar, Head of IoT/Industry X.0 & Mobile Practices for ASEAN+, Accenture; CK Vishwakarma, CEO/Founder of IoT interest group IoTSG; and Dean Shaw, Industry Solutions Director, Microsoft.

Future of Manufacturing: Emerging Legal Challenges panel session.

 

With the aim of illuminating a tangential aspect of Industry 4.0 little understood by manufacturing end users or by many technology suppliers, I posed a number of questions to the panel and what follows is a summary of the discussions in the first half of the one-hour session. Part 2 of this article, next week, details issues of data privacy, data ownership, and the legal implications of artificial intelligence, which the panel also debated and discussed.  

 

Q: Why do we need to talk about legal issues at a smart manufacturing conference?

Because the technologies and projects discussed during the two days of this forum do give rise to legal issues – whether it's cybersecurity, privacy, who owns the data, intellectual property, etc. And also, many deals are not happening and projects are being blocked because of concerns about the legal implications. However, quite often, the perception of the extent of the legal challenges is worse than the reality. Yes, there are issues, but with common sense and good business practice you can navigate your way through.

 

Another reason is that clients often come to legal counsel too late in the process, when there is already a done deal. When the legal issues come up at this stage, counsel can get a bad reputation as a type of business prevention unit. So we need to emphasize that legal is not scary; there are legal obligations that companies need to meet but if you work with your lawyers from Day 1, you can mitigate the pain points right from the beginning and avoid any impact on project implementation. There is still a job to be done to raise awareness of legal issues related to smart manufacturing and so this conference topic is very timely. 

 

Q: How can and how should lawyers get involved in smart manufacturing initiatives?

Lawyers are primarily concerned with the liability of your organization and can advise on key issues such as: What are you liable for to your customers? What liability needs to be borne by your vendors? What are you liable for to the regulators? What are the different jurisdictions (and their implications) of all the countries in which you are operating?

It is very much about legal working with the business to come up with the right processes, policies and having the right people in place to implement the whole smart manufacturing process. The law should not be seen as a hindrance but rather a framework to help define risks and responsibilities and should certainly not stop you from innovating in this sector.  

 

When we talk about the law, traditionally, it’s about managing the risk of a transaction. But the definition of risk is changing: risk used to be adopting new IoT technology or moving data to the cloud etc, but now risk is standing still and letting others overtake you. Legal counsel’s partnership with the business is perhaps not fully evolved yet but it is getting better and faster. Indeed, there has been progress over the last 10 years, with the often acrimonious relationships with in-house counsel at manufacturing and technology companies being replaced by lawyers thinking a lot more about their role in facilitating desired business outcomes. 

 

Q: Cloud adoption is increasing rapidly. Are there legal issues that companies should be concerned with around this trend?

There are often questions related to the geographical location of data. For example, I have a manufacturing plant in the Philippines and store data from that facility in Singapore – so which country's jurisdiction do I need to be concerned about? Well, the plant itself would be subject to Philippine data protection laws that specify how and whether you can transfer data outside of the country. When the data lands in Singapore, additional considerations arise, like law enforcement access to the data, i.e., can the police come to the data center and ask to see the data? So the answer is that once data is in the cloud, several different jurisdictions can apply, but first and foremost you would be concerned with where the data originates, which in this case is the Philippines.

 

With cloud, when it comes to the issue of data sovereignty, there can be a gap in CTO understanding of what they can and can't do with the data. This is an example of where commercially oriented legal counsel at cloud technology providers can help get customers comfortable with migrating data to the cloud, and with what data may, in certain cases, need to be kept on premise.

 

Microsoft is a good example because the company almost uses legal as a sales tool. Because a lot of people thought that you could not adopt cloud if you are manufacturing in, say, Indonesia, it has developed a website that summarizes the regulations that apply to cloud across the region. It enables companies to cut through the regulatory landscape and helps them realize they can do more things with the cloud than they had envisaged.      

 

Reprinted with permission; the original blog was posted here. You may also visit here for more such insights on the digital transformation of industry.

 About ARC Advisory Group (www.arcweb.com): Founded in 1986, ARC Advisory Group is a Boston based leading technology research and advisory firm for industry and infrastructure.

For further information or to provide feedback on this article, please contact lkanickaraj@arcweb.com

 About the Author:

Bob Gill

Bob joined ARC Advisory Group in 2014 after a decade-long career in industrial technology media, most recently as Editor-in-Chief at Singapore's Contineo Media, where he had editorial management responsibility for Control Engineering Asia, Asia Food Journal, PharmaAsia, Logistics Insight Asia, and Payload Asia, while also concurrently being Editor of Control Engineering Asia.

Authors:

 

Srikar Reddy

Managing Director and Chief Executive Officer, Sonata Software Limited 

 

Mauro F. Guillén

Anthony L. Davis Director of The Lauder Institute, Dr. Felix Zandman Professor of International Management, The Wharton School, University of Pennsylvania

 

 

Artificial intelligence (AI) relies on big data and machine learning for myriad applications, from autonomous vehicles to algorithmic trading, and from clinical decision support systems to data mining. The availability of large amounts of data is essential to the development of AI. Given China's large population and business sector, both of which use digitized platforms and tools to an unparalleled extent, it may enjoy an advantage in AI. In addition, it has fewer constraints on the use of information gathered through the digital footprint left by people and companies. India has also taken a series of similar steps to digitize its economy, including biometric identity tokens, demonetization and an integrated goods and services tax.

 

But the recent scandal over the use of personal and social data by Facebook and Cambridge Analytica has brought ethical considerations to the fore. And it's just the beginning. As AI applications require ever greater amounts of data to help machines learn and perform tasks hitherto reserved for humans, companies are facing increasing public scrutiny, at least in some parts of the world. Tesla and Uber have scaled down their efforts to develop autonomous vehicles in the wake of widely reported accidents. How do we ensure the ethical and responsible use of AI? How do we bring more awareness about such responsibility, in the absence of a global standard on AI?

 

The ethical standards for assessing AI and its associated technologies are still in their infancy. Companies need to initiate internal discussion as well as external debate with their key stakeholders about how to avoid being caught up in difficult situations.

 

 

Consider the difference between deontological and teleological ethical standards. The former focuses on the intention and the means, while the latter on the ends and outcomes. For instance, in the case of autonomous vehicles, the end of an error-free transportation system that is also efficient and friendly towards the environment might be enough to justify large-scale data collection about driving under different conditions and also, experimentation based on AI applications.

 

By contrast, clinical interventions and especially medical trials are hard to justify on teleological grounds. Given the horrific history of medical experimentation on unsuspecting human subjects, companies and AI researchers alike would be wise to employ a deontological approach that judges the ethics of their activities on the basis of the intention and the means rather than the ends.

 

Another useful yardstick is the so-called golden rule of ethics, which invites you to treat others in the way you would like to be treated. The difficulty in applying this principle to the burgeoning field of AI lies in the gulf separating the billions of people whose data are being accumulated and analyzed from the billions of potential beneficiaries. The data simply aggregate in ways that make the direct application of the golden rule largely irrelevant.

 

Consider one last set of ethical standards: cultural relativism versus universalism. The former invites us to evaluate practices through the lens of the values and norms of a given culture, while the latter urges everyone to live up to a mutually agreed standard. This comparison helps explain, for example, the current clash between the European conception of data privacy and the American one, which is shaping the global competitive landscape for companies such as Google and Facebook, among many others. Emerging markets such as China and India have for years proposed to let cultural relativism be the guiding principle, as they feel it gives them an edge, especially by avoiding unnecessary regulations that might slow their development as technological powerhouses.

 

Ethical standards are likely to become as important in shaping global competition as technological standards have been since the 1980s. Given the stakes and the thirst for data that AI involves, it will likely require companies to ask very tough questions about every detail of what they do to get ahead. In the course of the work we are doing with our global clients, we are looking at the role of ethics in implementing AI. The way industry and society address these issues will be crucial to the adoption of AI in the digital world.

 

However, for AI to deliver on its promise, it will require predictability and trust. These two are interrelated. Predictable treatment of the complex issues that AI throws up, such as accountability and permitted uses of data, will encourage investment in and use of AI. Similarly, progress with AI requires consumers to trust the technology, its impact on them, and how it uses their data. Predictable and transparent treatment facilitates this trust.

 

Intelligent machines are enabling high-level cognitive processes such as thinking, perceiving, learning, problem-solving and decision-making. AI presents opportunities to complement and supplement human intelligence and enrich the way industry and governments operate.

 

 

However, the possibility of creating cognitive machines with AI raises multiple ethical issues that need careful consideration. What are the implications of a cognitive machine making independent decisions? Should it even be allowed? How do we hold them accountable for outcomes? Do we need to control, regulate and monitor their learning?

 

A robust legal framework will be needed to deal with those issues too complex or fast-changing to be addressed adequately by legislation. But the political and legal process alone will not be enough. For trust to flourish, an ethical code will be equally important.

 

The government should encourage discussion about the ethics of AI, and ensure all relevant parties are involved. Bringing together the private sector, consumer groups and academia would allow the development of an ethical code that keeps up with technological, social and political developments.

Government efforts should be collaborative with existing efforts to research and discuss ethics in AI. There are many such initiatives which could be encouraged, including at the Alan Turing Institute, the Leverhulme Centre for the Future of Intelligence, the World Economic Forum Centre for the Fourth Industrial Revolution, the Royal Society, and the Partnership on Artificial Intelligence to Benefit People and Society.

 

 

 

This blog was originally published in the World Economic Forum.

https://www.weforum.org/agenda/2018/07/we-know-ethics-should-inform-ai-but-which-ethics-robotics

 

Artificial Intelligence and Machine Learning have the potential to revolutionize DevOps productivity and business efficiency. The DevOps model serves as a means to accelerate development efforts and deliver new applications. The main requirements for DevOps are the ability to build applications with the latest production data, deliver updates quickly with more application testing in less time, speed up the release cycle and integration testing, and reduce restoration time. DevOps practices generate large amounts of data, and technologies like machine learning and artificial intelligence are required to analyze it.

 

Artificial Intelligence and Machine Learning are providing solutions to optimize DevOps processes. They help keep track of production performance, establish links to previous problems, and assess how effective the solutions provided were. AI and ML can automate routine and repeatable DevOps actions with enhanced efficiency to improve the performance of teams and the business.

 

AI has a huge impact on DevOps in the following ways:

 

Improves data accessibility: AI can help the DevOps team collect data from multiple sources and prepare it for reliable and robust evaluation. AI addresses a major issue for DevOps teams – the lack of accessibility to unstructured data – by releasing data from its formal storage.

 

Efficient implementation capability: AI allows teams to switch from rules-based human management systems to self-governed systems, improving efficiency by resolving the complexity of assessing human agents.

 

Adequate utilization of resources: AI helps minimize the complexity of managing resources, since it can automate routine and repeatable tasks.

 

Ways in which ML can optimize DevOps:

 

Analyzing and tracking application delivery: Applying ML to DevOps tools identifies irregularities such as long development durations and slow release rates, uncovering software development waste, inefficient resourcing, and processes that slow things down.

 

Securing application quality: ML can efficiently build a test pattern by analyzing the output from testing tools. It can review QA results and ensure testing on every release, increasing the quality of delivered applications.

 

Managing production: ML helps analyze general patterns and detect irregular ones, like memory leaks and race conditions. This technology helps evaluate the large amounts of data generated during production.

 

Predicting application and server failure: Based on historical data, machine learning can be employed to predict future server and application failures. This helps teams act on servers before failure occurs, preventing downtime.
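
A compact sketch of that idea, assuming scikit-learn is available: train a classifier on historical metrics labeled with whether a failure followed, then score live readings. The features and the tiny training set are invented; a real system would train on months of telemetry:

```python
# Illustrative failure prediction: fit a classifier on labeled
# historical metrics, then score a live reading. Requires scikit-learn;
# all numbers are invented for illustration.

from sklearn.linear_model import LogisticRegression

# historical samples: [cpu_pct, mem_pct, errors_per_min] -> failed within 24h?
X = [[35, 40, 0], [50, 55, 1], [88, 91, 9], [92, 95, 14],
     [40, 40, 0], [85, 88, 11], [30, 35, 0], [90, 93, 12]]
y = [0, 0, 1, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)

current = [[87, 90, 8]]                    # live reading from a server
risk = model.predict_proba(current)[0][1]  # probability of failure
if risk > 0.7:
    print(f"failure risk {risk:.0%}: drain and restart before it goes down")
```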

 

Log analysis: Analyzing logs enables machines to identify patterns and make decisions; it can also be used for security and performance monitoring.
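
For example, a log analyzer might count error patterns per component and flag anything abnormal. The log lines, regex, and threshold here are illustrative:

```python
# Illustrative log analysis: extract the component from each ERROR line,
# count errors per component, and flag the noisy ones. Log format,
# pattern, and threshold are invented for illustration.

import re
from collections import Counter

LOG_LINES = [
    "2018-08-01 10:00:01 auth INFO login ok",
    "2018-08-01 10:00:04 auth ERROR token expired",
    "2018-08-01 10:00:05 db ERROR connection refused",
    "2018-08-01 10:00:06 db ERROR connection refused",
    "2018-08-01 10:00:09 db ERROR timeout",
]

error_counts = Counter(
    m.group(1)
    for line in LOG_LINES
    if (m := re.search(r"\s(\w+)\sERROR\s", line))
)

THRESHOLD = 2  # errors per window before we flag a component
for component, count in error_counts.items():
    if count > THRESHOLD:
        print(f"flag {component}: {count} errors this window")  # -> 'db'
```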

 

See also: Top tips for maintaining data privacy on your smartphone.

 

Organizations can employ AI and ML to optimize their DevOps environment. These technologies help manage complex data and put it to use in the app development process. Applying AI and ML can improve a company's ROI by enhancing DevOps operations and making them more responsive. They can improve the efficiency as well as the productivity of the team, and they help DevOps teams focus on creativity and innovation by eliminating issues in processes and enabling teams to manage the available data.

The IIoT edge environment, where enterprise-level IT processes interact with the production-driven OT landscape, currently represents one of the largest areas of opportunity within the overall activity surrounding digital transformation.  The sheer number of edge initiatives pursued by IT and OT suppliers, as well as standards committees, trade organizations and other industry participants, is but one indicator of the perceived potential for edge activities to drive incremental performance improvements.

After listening to a recent radio interview with a musician inspired by acclaimed cellist Yo-Yo Ma’s Silk Road project, I was struck by the parallels between his analysis of the edge effect on music and culture and the current focus on IIoT edge activities.  To paraphrase, he cites the edge effect in ecology as the point where two adjoining ecosystems meet to create greater species diversity and new life forms, blending disciplines to make new discoveries.   In his work as an artist, this translates to advocating the influence of a variety of global cultures, including ancient ones, to create new works that would not be possible if musicians worked solely within their own ecosystem. 

Acclaimed cellist Yo-Yo Ma
(Source:  www.npr.org)

 

In another example he extends this concept to advocating the need to add Arts to the current STEM (Science-Technology-Engineering-Math) emphasis in education, or migration from STEM to STEAM.  In his view, this type of “edge integration” results in innovative new possibilities due to the collaboration, flexible thinking, disciplined imagination, and innovation required.

What strikes me are the parallels between these assertions and the business value propositions inherent in IIoT edge activity: product and service innovation, new business and revenue opportunities, entirely new service-oriented business models in traditionally capital-intensive industries, and new insights from legacy installations, among others. These parallels are most obvious if we view the OT and IT environments as separate ecosystems that have traditionally operated solely within themselves, with limited interaction.

The edge effect concept as it relates to the IIoT is just beginning to reveal the significant opportunity for business value improvement and potentially disruptive innovation.  This speaks to the need for continued efforts toward connectivity and collaboration between IT, OT, design, service, and other enterprise entities, as well as continued focus on the primary business objectives driving the collaboration.  These efforts in turn will allow businesses to leverage the edge effect for new and exciting growth opportunities.

Yo-Yo Ma Blog on the Edge Effect

The Silk Road Project

 

“Reprinted with permission; the original blog was posted here.” You may also visit here for more such insights on the digital transformation of industries.

About ARC Advisory Group (www.arcweb.com): Founded in 1986, ARC Advisory Group is a leading Boston-based technology research and advisory firm for industry and infrastructure.

For further information or to provide feedback on this article, please contact akanagali@arcweb.com.

 

About the Author:

 

Chantal Polsonetti

Vice President, Advisory Services 

Chantal's primary activities include working with the ARC teams covering the Industrial Internet of Things (IIoT), industrial networks, intelligent train control and rail signaling, and other topics.

 

 


This past week Fiat Chrysler Automobiles (FCA) announced the recall of 4.8 million US cars to install revised software in their Engine Control Modules (ECMs). A few years ago, the same manufacturer had to recall over 1 million vehicles after cybersecurity researchers showed that they could remotely penetrate a car’s control systems through its connected infotainment system.

 

These two recalls show two sides of the connectivity “coin.” In one case, connected car systems were not securely isolated. In the other, no connectivity was available, so a software upgrade required a service procedure on each vehicle. With respect to connectivity, poor FCA seems damned if they do and damned if they don’t. If both of these recalls illustrate the way not to do things, then what is the right way to plan for systems of the future? This question is equally relevant to industrial IoT deployments and to industrial automation systems of the future, not just to cars. I propose two key lessons from this debacle:

Connectivity is a Must

First, suppliers can no longer afford to deploy disconnected products or systems. If the ECM upgrade service costs $200 per vehicle (my estimate), then FCA will spend nearly $1 billion on this recall (4.8 million vehicles × $200 ≈ $960 million). That figure will subtract straight from their bottom line. Ouch. And that billion will not add one bit of additional functionality to the vehicles, and the same cars remain exposed to any new vulnerability, which would have to be remedied in the very same costly way. In hindsight (always easy, I admit), an earlier investment in product designs with secure connectivity and remote software update capability would have saved almost all of that billion dollars and would have greatly reduced the financial risk of future incidents.

 

Do you think you can’t afford to design secure connectivity into your products and systems? Then ask yourself how much it would cost you (and your customers!) to repair a similar software vulnerability across your installed base. Or do you simply hope this never happens to you?

 

Business Process Disruption

Second, business processes need to be re-imagined in order to realize the potential value of connected assets. EV manufacturer Tesla designed into its cars the capability to upgrade vehicle software remotely, probably because it knew that with an entirely new product this would be necessary. How many of today’s industrial IoT or industrial automation installations can do the same? Not many, I would guess. Yet this capability improves service levels while driving down service costs compared with today’s antiquated service processes.

 

Software for industrial IoT and industrial automation systems of the future will be remotely serviceable. All software. Yes, I mean all. So don’t shackle your products (and your customers!) to the outdated, costly, and labor-intensive service models of the past.
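In practice, “remotely serviceable” presupposes that a device can verify an update’s authenticity before applying it. Below is a minimal sketch of that verification step, assuming (hypothetically) that the vendor signs releases with Ed25519, the device is provisioned with the vendor’s public key, and Python’s cryptography package is available; the file names and placeholder key are illustrative only.

```python
# Hypothetical sketch: verify a signed firmware image before applying it.
# Assumes the vendor signs releases with Ed25519 and the device ships
# with the vendor's public key. File names and the key are placeholders.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

VENDOR_PUBLIC_KEY = b"\x00" * 32  # placeholder; provisioned at manufacture

def verify_update(image_path: str, sig_path: str) -> bool:
    """Return True only if the image's detached signature checks out."""
    public_key = Ed25519PublicKey.from_public_bytes(VENDOR_PUBLIC_KEY)
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, image)
        return True
    except InvalidSignature:
        return False

if verify_update("firmware.bin", "firmware.sig"):
    print("signature OK; hand off to the installer / A-B partition swap")
else:
    print("rejecting update; never apply an unverified image")
```

Signature verification, rather than a plain checksum, is the point here: it is what stops an attacker who controls the update channel from pushing malicious firmware to a connected fleet.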

 

“Reprinted with permission; the original blog was posted here.” You may also visit here for more such insights on the digital transformation of industry.

About ARC Advisory Group (www.arcweb.com): Founded in 1986, ARC Advisory Group is a leading Boston-based technology research and advisory firm for industry and infrastructure.

For further information or to provide feedback on this article, please contact lkanickaraj@arcweb.com.

 About the Author:

Harry Forbes

Harry is ARC's lead analyst for the Distributed Control System (DCS) market. ARC also leverages Harry's utility expertise in its coverage of Smart Grid, industrial networking, networking and communication standards, and the electric power vertical industry. His research topics include DCS integration and architecture, Smart Grid, smart metering, energy storage, industrial wireless, industrial Ethernet, and emerging network technologies.

Despite all the market focus on the industrial edge, many companies struggle to identify precisely where (or even what) it is. This makes it difficult to deploy an effective operational analytics strategy. When considering an industrial edge environment, it is helpful to think about it from both operating and network infrastructure perspectives.

 

Operational Industrial Edge

The operational edge is the more straightforward of the two edge environments. It’s the logical operating endpoint of a business and, as such, is relatively easy to understand, though it can “shift” depending on who within the organization is defining the edge (enterprise, customer, operations, etc.).

 

The same logic transfers to industrial and infrastructure environments. A mining company considers a site, and the equipment within it, to be the operational edge of its business. For oil and gas, it could be a platform or well and the related equipment, such as flare stacks, pipelines, and pumps.

 

For a manufacturer, the operational edge within the plant consists of machines and equipment, such as a material handling robot, a metalworking press, or a capacitor. For an electric utility with generating assets, the edge would be a turbine within a coal- or gas-fired plant or a gearbox on a wind turbine. If the same utility transmits and distributes that electricity, the operational edge could extend to substations, transformers, and, perhaps, electric-vehicle charging stations. As infrastructure extends into a community, the operational edge becomes heavy- or light-rail trains and the engines and brakes within them, or elevators within facilities.

 

In each of these examples, the edge equipment fulfills a limited, and sometimes isolated, critical purpose within a larger operational process.  The context of the equipment performance within operations is well understood.

 

Network Industrial Edge

In contrast, the network industrial edge is not as evident to industrial operators. Much of the confusion emanates from operations personnel who view technology through the lens of long-standing, refined processes, including distributed process control, programmable logic control, and other automation applications.  That often leads to the misperception that IIoT processes are similar in construct and value to what they have successfully done for decades.

Industrial Edge Key Components

That network view is informed by data sharing, hierarchies, and systems that preceded the development of the industrial internet.  In the traditional OT world, those systems were isolated within operational process silos, so IT was considered outside of that purview and, often, a barrier to efficient operations.

 

In fact, defining the network edge begins as an IT endeavor, but it doesn’t end there. It starts with cloud computing, based on the development of internet-enabled data sharing and the creation of smart devices. The cloud became the “centralized” environment from which the “edge” could then be identified and defined. The network edge was viewed as the equipment and systems that fed data to it. That definition is too simplistic for today’s industrial purposes.

Like its operational counterpart, the industrial network edge occurs at the logical extremes of an information technology network. It consists of the equipment and devices capable of data communication, management (e.g. security, visualization, preparation, storage), and/or computing.

 

The network infrastructure edge can include a range of technologies, and this ecosystem diversity is often an additional point of confusion. It is often a mix of traditional IT equipment and devices, purpose-built industrial systems, and added technology such as sensors and actuators. These technologies can be introduced into an industrial environment as individual components or within a network, and their functionality can also be embedded within operational infrastructure, such as turbines, vehicles, and robots.

 

One IIoT to Rule Them All and Somewhere in the Business Bind Them

Okay, so not quite as eloquent as Tolkien. The Industrial Internet of Things (IIoT) now binds operational and network infrastructure edges together into a cohesive whole. That’s one part of the IT/OT convergence you may have heard a little about…

 

“Reprinted with permission; the original blog was posted here.” You may also visit here for more such insights on the digital transformation of industry.

About ARC Advisory Group (www.arcweb.com): Founded in 1986, ARC Advisory Group is a leading Boston-based technology research and advisory firm for industry and infrastructure.

For further information or to provide feedback on this article, please contact lkanickaraj@arcweb.com.

 About the Author:

Michael Guilfoyle

Michael's expertise is in analysis, positioning, and strategy development for companies facing transformational market drivers.  At ARC, he applies his expertise to developments related to Industrial Internet of Things (IIoT) and advanced analytics, including machine learning.

We are planning a report on chatbots, to be released soon. If you have a chatbot solution, here is a chance to get it featured in the report. Download the attached form, fill it out, and send it to Kshitiz Arora at Kshitiz@nasscom.in. If you have any queries, let us know in the comments below.

Reiterating our commitment to nurturing emerging technologies, particularly AI, and to leveraging the power of data science, NASSCOM, along with the Karnataka Government, launched the Centre of Excellence for Data Science and Artificial Intelligence (CoE – DSAI) on the 5th of July. The centre was inaugurated by Mr. KJ George, Minister for Large & Medium Scale Industries, IT & BT, Science & Technology, Government of Karnataka, in the presence of other eminent dignitaries: Mr. Gaurav Gupta, Principal Secretary, IT, BT and S&T, Government of Karnataka; Nivruti Rai, Country Head, Intel India and VP, Data Center Group, Intel Corporation; and Nipun Mehrotra, Chief Digital Officer (CDO) for India/South Asia, IBM, along with Debjani Ghosh, President, NASSCOM, and Rishad Premji, Chairman, NASSCOM.  Intel & IBM are the founding members.

 

The Grand Vision: To be the Data Science & Artificial Intelligence accelerator and establish India amongst the top three global DSAI destinations by 2022.

 

The Mission: Improve and position Karnataka's capabilities in Data Science & Artificial Intelligence by enabling the development of innovative applications and research, promoting the use of DSAI by both businesses and governments, developing appropriate skills and talent, and enabling policies that accelerate the creation and adoption of DSAI solutions for a better society.

 

The CoE is focused on Smart Manufacturing, Automotive, Healthcare, Agriculture, Energy, IoT, BFSI, Retail & Telecom, among others. AI feeds on very large and credible data sets, which is what this centre will provide for training models on both cloud and on-premises technologies. In addition, industry-wide subject matter experts will also double up as mentors.

 

A catalyst for:

 

  • A collaborative sandbox for innovation & problem solving
  • Job creation to sustain the industry’s growth
  • Platforms for the best minds across industry, academia, start-ups/SMEs, and government
  • Training, mentoring & sharing of best practices
  • Co-creation and research to accelerate India’s economic development

 

To know more about NASSCOM's initiatives and programs, follow NASSCOM impact stories.

The Centre of Excellence for IoT, located in Bangalore, runs a very active co-creation program that brings together innovative solutions across its partners. Some of the use cases include:

 

Co-creation with John Deere:

  • Forklift tracking on the shop floor to understand actual utilization
    • A forklift is truly utilized only when it is moving with a load; when it is stationary or moving empty, it is not. The project measures how long the lift spends moving with a load, moving empty, and standing still, to determine whether utilization can be improved (a minimal sketch of this calculation follows the list).
  • Mobile phone-based voice-query assistance for the shop floor manager
    • The application converts a spoken query into text, fetches data from the backend, and speaks the answer back, sparing the manager a trip to the office to get the required answer.
  • Part identification using imaging and AI
    • Identification of automotive parts using image processing and AI, addressing situations where similar-looking parts lead to the wrong part being used on the assembly line.
  • Real-time tracking of parts from supplier to consumption
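As referenced above, here is a minimal sketch of the utilization calculation, assuming the tracker emits periodic samples carrying moving and loaded flags; the field names, sample data, and 10-second sampling interval are illustrative assumptions, not details from the project.

```python
# Hypothetical sketch: compute forklift utilization from periodic
# telemetry samples. Field names, sample data, and the 10-second
# sampling interval are illustrative assumptions.
from collections import Counter

SAMPLE_SECONDS = 10  # assumed sampling interval

# In practice these would stream from the tracker on the forklift.
samples = [
    {"moving": True, "loaded": True},
    {"moving": True, "loaded": False},
    {"moving": False, "loaded": False},
]

def state(sample: dict) -> str:
    if not sample["moving"]:
        return "idle"
    return "moving_loaded" if sample["loaded"] else "moving_empty"

totals = Counter(state(s) for s in samples)
total_time = len(samples) * SAMPLE_SECONDS

# Only time spent moving with a load counts as true utilization.
utilization = totals["moving_loaded"] * SAMPLE_SECONDS / total_time
print(f"utilization: {utilization:.0%}")
print(f"idle: {totals['idle'] * SAMPLE_SECONDS}s, "
      f"empty travel: {totals['moving_empty'] * SAMPLE_SECONDS}s")
```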

 

Sheela Foam:

  • Automating quality inspection using image processing
    • Replacing manual quality inspection with camera-based inspection to improve inspection efficiency

 

Toshiba:

  • Reading meters using mobile phone-based image processing

Many older industrial meters must still be read manually. The objective is to develop a solution that uses a mobile phone to take a picture of the meter, processes the image to extract the reading, and uploads it to a server; a minimal sketch of the processing step follows.
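A hedged sketch of the image-processing step, assuming a digit-display meter and the Pillow and pytesseract (Tesseract OCR) packages; the binarization threshold, OCR options, and file names are illustrative, and analog dial meters would need a different approach entirely.

```python
# Hypothetical sketch: extract a reading from a photo of a digit-display
# meter. Assumes Pillow and pytesseract (Tesseract OCR) are installed;
# the threshold, OCR options, and file names are illustrative.
import re

import pytesseract
from PIL import Image, ImageOps

def read_meter(photo_path: str):
    img = ImageOps.grayscale(Image.open(photo_path))
    img = img.point(lambda p: 255 if p > 128 else 0)  # crude binarization
    # Treat the display as a single text line and allow digits only.
    text = pytesseract.image_to_string(
        img, config="--psm 7 -c tessedit_char_whitelist=0123456789."
    )
    match = re.search(r"\d+(\.\d+)?", text)
    return match.group(0) if match else None

reading = read_meter("meter_photo.jpg")
if reading is not None:
    print(f"upload to server: {reading}")  # e.g., POST to an ingestion API
```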

 

Social (Akshaya Patra – mid-day meals for children)

 

  • Food quality monitoring – using temperature and location parameters, leveraging technology from Locanix and Accenture.

 

To know more about NASSCOM's initiatives and programs, follow NASSCOM impact stories.

In June 2018, NITI Aayog released the National Strategy for Artificial Intelligence, which highlighted four areas for harnessing the full power of AI for India: Research – Core & Applied; Skilling & Re-skilling; Accelerating Adoption; and Ethics, Privacy & Security.

 

Both NITI Aayog and NASSCOM have agreed to work together to promote AI & machine learning towards Applied Research, Accelerating Adoption, and Ethics, Privacy & Security.

 

This was a great opportunity for us to sign an MoU with NITI Aayog on 5th July to collaborate on advancing AI adoption in India; facilitate key stakeholder interaction; share knowledge; and organize summits and meetings to disseminate learnings and reach a broader audience. Additionally, it’s a substantive boost to NASSCOM’s recently launched Futureskills platform, which aims to prepare 1.5 million professionals in niche technology areas over the next 4–5 years.

 

Scope of Work

 

  • An India AI portal to host information about AI & machine learning.
  • A focussed effort to get skilled women job-ready in AI; NASSCOM’s Women Wizard Rule Tech (WWRT) is one such endeavour.
  • Identify global platforms to showcase India’s potential in AI.
  • Explore and facilitate the setting up of a National AI Marketplace.
  • Set up think-tanks.
  • Facilitate research.

 

NASSCOM’s Levers

 

  • The 10k Programme, NASSCOM Product Conclave (NPC), the Deep Tech Clubs, and incubators in NCR & Bengaluru.
  • The newly established CoE – DS & AI with the Governments of Karnataka & Telangana, which will conduct PoCs on real-life problems for corporates and government; NITI Aayog will be the Knowledge Partner.
  • NASSCOM events with global outreach.
  • NASSCOM Research, which forms the bulwark of thought leadership.
  • The Futureskills platform.

 

To know more about NASSCOM's initiatives and programs, follow NASSCOM impact stories.