The first industrial revolution marked the transition from traditional hand production to machine-driven production. The second was characterised by rapid industrialisation driven by the technological innovations of the day, moving from small production units to massive factories where hundreds of people worked at any given time. The third industrial revolution could be credited to the World Wars – perhaps the only good to come out of the carnage and destruction. This revolution saw the introduction of computers and robots to aid production, resulting in greater efficiency, throughput and quality.
We are now in the fourth revolution, which is commonly known as Industry 4.0.
Industry 4.0 has been powered by a number of factors. Every previous transition was driven primarily by economic considerations, but this one cannot be explained so simply. In the past few years, technology has evolved in both power and application. Businesses are now even more complicated entities, involving a range of stakeholders – from the governments of the states they operate in to local communities who now have an active say in how their resources are consumed. Customers, both corporate and individual, are more demanding. This is no longer a world in which "any colour, so long as it is black" is an acceptable business policy.
Once, technology meant tethering beasts to flywheels to grind flour. Now it means everything – from the solar panels that power the motor, to the sensors that determine how finely the flour has been ground, to the robots that pack it for dispatch, to the climate-controlled trucks used for transport, to electronic billing, mobile alerts and delivery by drone. Technology itself is a broad umbrella under which several other heads can be grouped.
It is impossible for us – even those who grew up with rotary telephones and fat picture-tube television sets – to imagine a world without smartphones, let alone mobile phones. Statista reports that the number of smartphone users in the world is likely to hit five billion by 2019. That is more than 60% of the world's population, up from single-digit percentages barely half a decade ago.
It is no wonder, then, that advertising has moved from traditional media – such as television and newspapers – to digital media consumed on handheld devices. Internet costs have dropped across the world while access infrastructure has scaled up, which has had a chicken-and-egg effect on smartphone usage. In emerging markets like India and China, more people consume data on their smartphones than on computers.
Smartphones have also opened up an entirely new ecosystem for businesses to reach out to and stay in touch with their customers. In an interesting statistic, as many billion-dollar companies were started in the past fifteen years as in the sixty years before that. Almost all of the former have relied on technology to drive their business models.
But the journey, as far as we can see, has barely begun. Smartphones have apps, and apps are a surer way for companies to communicate back and forth with their customers. They offer fewer distractions, better engagement and richer usage data than any other medium. As we move from 4G to 5G, even bandwidth will cease to be a constraint on connectivity.
Industry 4.0 will continue to be driven by connectivity solutions that don't stop at smartphones. Sensors, too, have become more accurate and efficient. They consume less power, can detect greater ranges and can often be connected wirelessly to other devices to form a network that monitors everything from one end of the business to the other. The air around us buzzes with zeros and ones every second – data streams that connect hundreds of devices and control units.
The Internet of Things (IoT), which is the more popular term for such an arrangement, has quickly moved from being the stuff of sci-fi (or, at the very least, top secret) to a tangible technology we can see around us. Companies are no longer asking for proofs of concept, because there is no longer a need to prove the concept. Instead, they are asking us how we can help them add instrumentation and automation to their work – whether in production or service delivery. A recent project of ours was for a company that automated legacy plants. We worked on the software that would monitor the feeds from the sensors, escalate appropriately when faults or warnings occurred and deliver detailed reports on how the systems were performing.
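The monitor-escalate-report loop described above can be sketched in a few lines. This is a minimal illustration, not the actual software from that project; the names `SensorReading`, `Monitor` and the threshold values are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor_id: str
    value: float

class Monitor:
    """Checks readings against per-sensor warn/fault thresholds and escalates."""

    def __init__(self, thresholds):
        # thresholds: {sensor_id: (warn_level, fault_level)}
        self.thresholds = thresholds
        self.log = []

    def ingest(self, reading):
        warn, fault = self.thresholds[reading.sensor_id]
        if reading.value >= fault:
            status = "FAULT"      # escalate immediately
        elif reading.value >= warn:
            status = "WARNING"    # flag for the next report
        else:
            status = "OK"
        self.log.append((reading.sensor_id, reading.value, status))
        return status

    def report(self):
        """Summarise per-sensor history, for the periodic performance report."""
        summary = {}
        for sensor_id, value, status in self.log:
            summary.setdefault(sensor_id, []).append(status)
        return summary

monitor = Monitor({"boiler-temp": (90.0, 110.0)})
print(monitor.ingest(SensorReading("boiler-temp", 85.0)))   # OK
print(monitor.ingest(SensorReading("boiler-temp", 120.0)))  # FAULT
```

In a real deployment the `ingest` call would be fed by the wireless sensor network, and "escalate" would mean paging an operator rather than returning a string, but the shape of the logic is the same.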
IoT is proving particularly useful in home automation and logistics. In home automation, it is used to make living conditions more efficient – climate control, turning off fans and lights when rooms are vacated, running robotic vacuum cleaners, monitoring air quality and such. Logistics employs IoT to keep track of movement of people, product and vehicles.
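The home-automation case – turning devices off when rooms are vacated – reduces to a simple rule over occupancy data. A toy sketch, with made-up room and device names:

```python
def desired_device_states(occupied_rooms, devices_by_room):
    """Turn devices on only in occupied rooms, off everywhere else."""
    states = {}
    for room, devices in devices_by_room.items():
        for device in devices:
            states[device] = room in occupied_rooms
    return states

states = desired_device_states(
    occupied_rooms={"living-room"},
    devices_by_room={"living-room": ["fan-1", "light-1"],
                     "bedroom": ["light-2"]},
)
print(states)  # {'fan-1': True, 'light-1': True, 'light-2': False}
```

The occupancy set would come from motion or presence sensors; the returned states would be pushed to smart switches over the same wireless network.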
Cloud services, in a way, are driving these innovations. Maintaining exclusive servers is still an expensive proposition, and the need for redundancies to fall back on makes it even more so. As a best practice, companies must provision more server capacity than they typically need so that their stacks do not fail at times of high demand – yet that means that for more than 95% of their running time, server stacks sit under-utilised. That is where cloud computing changed the rules of the game.
The cloud – a network of servers, some of them serving specific purposes – promises scalability, usage optimisation, failsafes, flexibility and near-perfect uptimes. This is often delivered through instantiation and load-balancing algorithms that spread traffic across the application's nodes, leaving very little chance of a system crash, even under a surge of the kind a DDoS attack produces. Even if one or two nodes of the cloud were to go down, neither data nor operational connectivity would be lost.
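The failover behaviour mentioned above – a node going down without service being lost – is easiest to see in a toy load balancer. This is a sketch, not any cloud provider's actual algorithm; node names are illustrative, and real balancers add health checks, weights and connection counts.

```python
import itertools

class RoundRobinBalancer:
    """Cycles traffic across nodes, skipping any that are marked down."""

    def __init__(self, nodes):
        self.healthy = set(nodes)
        self._cycle = itertools.cycle(nodes)

    def mark_down(self, node):
        self.healthy.discard(node)

    def route(self):
        # Skip unhealthy nodes; assumes at least one node is still up.
        for node in self._cycle:
            if node in self.healthy:
                return node

lb = RoundRobinBalancer(["node-a", "node-b", "node-c"])
print([lb.route() for _ in range(3)])  # each node in turn
lb.mark_down("node-b")
print([lb.route() for _ in range(4)])  # traffic now flows only to a and c
```

Because routing decisions depend only on the current healthy set, losing a node changes where requests land but never whether they are served – which is the uptime guarantee the paragraph describes.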
Data and Intelligence
Big data, artificial intelligence, machine learning and blockchain are four technologies that affect business intelligence. Companies are now able to connect data from diverse sources and make sense of them, helping them identify correlations and behaviour patterns that might otherwise be missed. Big data – massive volumes of data – is used for fraud detection and prevention, price discovery, customer analysis, operational benchmarking and more.
The first phase of big data left a lot to manual analysis: humans still had to make sense of the numbers and take appropriate decisions. The current and future phases, however, are augmented by machine learning. Self-learning programs, or bots, are assigned the task of collating new pieces of information as they enter the system, figuring out the patterns worth analysing, executing the analysis and eventually putting an actionable summary in front of the managers.
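The collate-analyse-summarise loop can be sketched with a deliberately simple stand-in for the learned model: a z-score outlier test over a metric. The function name and the sales figures are invented for illustration.

```python
import statistics

def actionable_summary(metric_name, values, z_threshold=2.0):
    """Flag readings more than z_threshold standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    outliers = [v for v in values
                if stdev and abs(v - mean) / stdev > z_threshold]
    if outliers:
        return f"{metric_name}: {len(outliers)} anomalous reading(s) - review"
    return f"{metric_name}: within normal range"

daily_sales = [100, 102, 98, 101, 99, 100, 250]  # one suspicious spike
print(actionable_summary("daily_sales", daily_sales))
```

A production system would replace the z-score with a trained model and push the summary into a dashboard, but the end product is the same: a short, actionable line a manager can act on, rather than raw numbers.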
But machine learning is not going to be restricted to a sidekick's role for big data. ML is driving decisions in sectors such as pharma and medicine, agriculture, transportation, hospitality and education. In his book, Kranti Nation: India and the Fourth Industrial Revolution, Pranjal Sharma talks in detail about how ML is changing the landscape even in governance. For instance, a Microsoft Azure project has been employed to track the successes and consequences of a health programme in south India, with a particular brief to understand how relapses occur. The book is in fact a must-read for anyone looking to understand in more detail how industry in India is gearing up for modern-day tools and challenges.
Artificial Intelligence and Machine Learning are often mistaken for each other, and to be fair, for most purposes the confusion is harmless. At the same time, as a technocrat, I would consider it an unforgivable sin to take that path of convenience. AI and ML aren't the same. In a manner of speaking, ML is a subset of AI.
Artificial intelligence refers to systems that can think, adapt and decide after taking in various factors, much as we would, even in an unstructured or fuzzy context. Machine learning, on the other hand, refers to systems that are purpose-built to learn by themselves, often within a broad context or situation. I know, I know, it takes a while to sink in… but it is an important distinction nonetheless.
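The "learning by itself" part of ML is easiest to see in the smallest possible example: fitting a straight line. Nothing below tells the program the relationship between x and y; it estimates the slope and intercept from the examples alone. (This is a textbook least-squares fit, shown only to make the distinction concrete.)

```python
def fit_line(xs, ys):
    """Estimate slope and intercept of y = slope * x + intercept from data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# The program is never told "y = 2x + 1"; it infers it from the samples.
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(round(slope, 2), round(intercept, 2))  # 2.0 1.0
```

A rule-based AI would have the relationship hard-coded by an engineer; an ML system recovers it from data – and can recover a different one when the data changes. That is the subset relationship in miniature.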
AI, therefore, is implied wherever ML is employed. In addition, AI finds uses in systems where there are definite boundaries to what the system must do. For instance, in hospitality, AI is used by service providers to forecast demand based on key parameters; extrapolate it under new conditions (such as the visit of a celebrity during a festival); determine staffing and material requirements; identify competitive price points and manage dynamic pricing; earmark high-value properties for last-minute guests (who won't mind paying a premium when rooms are short); and even put together packages of add-ons and third-party services to create a better experience for the guests.
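The dynamic-pricing piece of that hospitality example is, at its core, a bounded rule system. A hedged sketch – the multipliers, thresholds and function name are invented for illustration, and a real system would derive them from forecast models rather than hard-code them:

```python
def room_price(base_price, occupancy_forecast, special_event=False):
    """Scale a base room rate with forecast occupancy and event demand."""
    price = base_price
    if occupancy_forecast > 0.9:
        price *= 1.5   # near-full: premium for last-minute guests
    elif occupancy_forecast > 0.7:
        price *= 1.2
    if special_event:
        price *= 1.25  # e.g. a festival or a celebrity visit in town
    return round(price, 2)

print(room_price(100.0, occupancy_forecast=0.95, special_event=True))  # 187.5
```

The "definite boundaries" the paragraph mentions are visible here: the system only ever sets a price, within known multipliers, from known inputs – unlike open-ended ML, which would also learn the multipliers.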
3D, augmented reality (AR), virtual reality (VR) and mixed reality are no longer the tools of the elite they once were, when the hardware required to run such solutions was prohibitively expensive. Now even a reasonably capable smartphone can run AR/VR/3D applications.
Like AI and ML, AR and VR often end up getting clubbed together. Unlike AI and ML, however, AR and VR are not subset and superset. AR essentially refers to adding digital objects – images, text boxes, interactive buttons, videos and so on – to a real-world context. In VR, that real-world context itself is absent. VR creates a completely virtual world, one you find yourself inside whichever direction you look. The applications they suit, therefore, are also different.
AR is the recommended solution when context-specific overlays are needed. For instance, a brochure can be scanned and a 3D model can be superimposed on the images (known as markers in AR) so that the viewer gets a complete 360-degree look at the product. We’ve used this for our real estate, automotive and industrial machinery clients. In the US, AR is already playing a key role in medical training and diagnostics. As an educational tool, few can match the experiential value of AR.
VR, on the other hand, can be used in situations where the viewer might need to fully immerse themselves in an experience. For our real estate clients, we have created virtual apartments that visitors can navigate – either with a joystick or by moving around an area (clear of obstacles). VR would be great for teaching students about, say, the Jurassic era, for helping with phobia therapy, for experimenting with the look and feel of a room, for experimentation that would otherwise be impossible in the real world and as a marketing gimmick. Kids and adults alike love transporting themselves.
Mixed reality is, as the name suggests, a mix of real and virtual worlds. I’m going to let a video describe this one.
A common limitation of all three is that they are very personal technologies. Only the viewer has control, and everything is seen from their perspective. Even if you were to display the feed on a big screen, others would see only what the primary viewer – the one wearing the headset and/or holding the device – sees.
There is an alternative, though.
Holograms, once the kind of technology Star Trek (and Total Recall?) fans drooled over, are making a comeback of sorts. Until a few years ago, holograms had found only limited popularity. More a gimmick than a true solution, they weren't interactive and required extensive, expensive hardware setups. That is no longer the case. Holograms can be made interactive and, just as importantly, they can be run on kiosks.
Irrespective of the mode chosen, the success of visualization technology eventually hinges on how good the 3D modelling is. And that brings us to a heading which really doesn’t fit in with the three already listed.
Additive Manufacturing / 3D Printing
3D printing, experts argue, could eventually lead to the re-emergence of cottage industries. While 3D printing cannot be executed at the scale of hundreds of thousands of products a month, as in an assembly-line factory, it can still multiply the normal throughput of small and cottage enterprises, letting them become profitable at their own scale.
3D printing enables rapid prototyping and additive manufacturing – building an object up layer by layer from a digital model, rather than cutting material away from a larger block.
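The layer-by-layer idea can be made concrete with a toy G-code generator. The `G1` move commands follow common 3D-printer conventions, but the geometry, extrusion values and function name here are purely illustrative – real slicers compute far more (infill, retraction, temperatures).

```python
def square_layer_gcode(side_mm, z_mm, extrude_per_mm=0.05):
    """Emit G-code tracing one square perimeter at a given layer height."""
    corners = [(0, 0), (side_mm, 0), (side_mm, side_mm), (0, side_mm), (0, 0)]
    lines = [f"G1 Z{z_mm:.2f}"]  # lift the nozzle to this layer's height
    for x, y in corners:
        lines.append(f"G1 X{x:.2f} Y{y:.2f} E{extrude_per_mm * side_mm:.3f}")
    return lines

# Build a 20 mm square box, 0.2 mm per layer, three layers tall:
# the object literally accumulates, one layer at a time.
gcode = []
for layer in range(3):
    gcode += square_layer_gcode(20, z_mm=0.2 * (layer + 1))
print(len(gcode), "commands; first:", gcode[0])
```

Each pass through the loop is one deposited layer – the defining move of additive manufacturing, and the reason a single desktop machine can produce one-off prototypes as easily as small batches.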
At the risk of putting forth an unusual term, we have already moved beyond Industry 4.0 and into 4.1, where the convergence of these technologies is taking place. AI, for instance, is being used in AR applications to give you real-time, real-world data: scan a car on the street and up pops an electronic brochure along with a choice of looks, all of which can be applied to the car then and there – virtually, of course. Want to see how a silver Jag might look instead of the black one in front of you?
Enterprise-level applications, such as Robotic Process Automation (RPA) systems, combine multiple technologies. RPA systems themselves include machine learning, character recognition, IoT, mobility and more, and are expected to disrupt the sectors they are introduced in. Many corporations have already invested in automation across their entire business networks; others are commissioning feasibility studies for integrations that will be high on flexibility and low on maintenance in the coming years.
Indeed, one might even say that the revolution is over, done and dusted, and what we are seeing now is the new order of things. One where innovative change is the only constant, where every incremental percent of operational efficiency must be grabbed, where investments must be made for today and tomorrow.
Over the next few weeks, we will be looking at each of these technologies in detail. Stay tuned.