
Why It’s Important for Edge Computing to be Open Source

The lid has been blown off the data center, and edge computing is poised to become the dominant paradigm. Consisting of robots on a factory floor, small servers mounted hundreds of feet in the air inside wind turbines, blast-proof servers in an oil field, and countless other devices, edge computing is set to overshadow both the cloud and the data center as the dominant data driver. According to one analyst firm, 45 percent of IoT-created data will be stored, processed, and acted upon at or close to the edge of the network in 2019. Another analyst predicts that 75 percent of enterprise-generated data will be created and processed at the edge by 2022. And ARC Advisory Group forecasts almost 25 percent annual growth for edge software platforms over the next five years.

While major vendors may see this as an opportunity to build a massive, global proprietary edge infrastructure that locks customers into their solutions for another 30 years, that's simply not going to happen. At the edge, every environment is different, producing too much diversity in hardware, software, networking, and use cases for any single proprietary approach to succeed. To achieve maximum business impact, the edge will run on the same open-source building blocks that powered the cloud before it. For operational, logistical, and security reasons, the edge can't succeed any other way. Here's why.

1. We haven’t seen scale and diversity like the edge before

The edge is populated with vertical-specific solutions designed to fit particular requirements. This specificity, combined with the sheer range of use cases and the distinct hardware, software, and networking needs of each, creates a highly diverse environment that is difficult to manage. Some of these applications have been in use for decades, running on outdated equipment that has never been modernized. And the sheer scale of that diversity – millions of small servers running specific applications across factory floors, attached to industrial equipment, installed inside every wind turbine – quickly magnifies the complexity.

One of the biggest challenges that comes from this heterogeneity is ensuring compatibility across so many different hardware and software types. Open-source organizations, like the Linux Foundation, provide the legal and collaboration infrastructure that makes it possible for many different organizations to coordinate efforts and develop agreed-upon architectures. These organizations create de facto standards that can be adopted by the entire industry, just as organizations such as the IEEE and the Internet Engineering Task Force (IETF) have done over the last 25-plus years in the growth of the Internet itself.

2. A single security approach doesn’t fit all

Just as the automotive, manufacturing, and retail industries all had different machine requirements during the Industrial Revolution, each industry's edge applications have their own unique needs. You can't eliminate diversity at the edge, so it must be secured through open source.

Linus's Law, coined by Eric S. Raymond in honor of Linux creator Linus Torvalds, was never more apt: "Given enough eyeballs, all bugs are shallow." Just as it has in the cloud and elsewhere, open source allows code to be examined deeply and broadly by hundreds (or even thousands) of developer eyes. The more people who can view and test code, the greater the chance that flaws will be caught and corrected. What's more, in open-source software, imperfections are typically discovered more quickly and fixed almost immediately.

Proprietary security only works on proprietary code. Since no single vendor can write a proprietary system for the entire edge, and data-center approaches don't transfer, you end up with a million point solutions that may or may not cooperate. The last thing the edge needs is more complications.

3. The edge has no perimeter

What does it mean for security when apps are cyber-physical, embedded in mobile objects such as robots and scooters, and operating in the wild without a human anywhere in sight? For one, no one can lock themselves into a walled, secure warehouse to service the equipment. It also means that updates and passwords can be intercepted through everything from spoofed IP addresses to dongles – and that millions of devices at thousands of locations must be updated at once.

In this brave new world, "zero-trust" security is required. Zero trust means rigorous identity verification for every person, piece of software, and device trying to access resources on a network, regardless of where the access attempt originates. And because the edge is such a diverse environment – from homes to headquarters to factories – zero trust can only work in a standards-based, open-source world.
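The zero-trust idea above can be sketched in a few lines of code: every request is authenticated by cryptographic device identity, never by its source network or IP address. The device names, key-provisioning scheme, and function are illustrative assumptions, not part of any particular product.

```python
import hmac
import hashlib

# Hypothetical per-device secrets, provisioned when the device is enrolled.
# In practice these would live in a hardware root of trust, not a dict.
DEVICE_KEYS = {
    "turbine-0042": b"example-secret-key",
}

def verify_request(device_id: str, payload: bytes, signature_hex: str) -> bool:
    """Zero-trust check: authenticate by identity on every request,
    with no implicit trust based on network location."""
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False  # unknown device: deny by default
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# A device signs its payload with its own key; the verifier checks it.
sig = hmac.new(DEVICE_KEYS["turbine-0042"], b"telemetry", hashlib.sha256).hexdigest()
print(verify_request("turbine-0042", b"telemetry", sig))    # True
print(verify_request("turbine-0042", b"tampered", sig))     # False
print(verify_request("unknown-device", b"telemetry", sig))  # False
```

Note the deny-by-default posture: an unrecognized device or a tampered payload fails verification no matter where the request came from, which is the essence of removing the perimeter.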

4. Infinite use cases

With so many different deployments of edge computing (IoT devices for homes and manufacturing, wearable technology, and more), flexibility and customizability are extremely important. Every business, device, or industry deploying edge computing applications has its own unique requirements. For example, industrial IoT must remain compatible with a wide variety of legacy applications, electric scooters must communicate with multiple mobile operating systems, and so on.

Open source enables these various uses and permits enterprises to customize the software to their individual needs.  Plus, organizations can contribute their code and use cases back into the project, enabling future innovation for others.  What’s more, open source enables businesses to change and adapt at their own pace, without relying on proprietary software upgrade timelines.

5. Flexibility means innovation

Open source creates nearly unlimited possibilities for improvements, enhancements, and features for computing on the edge.  In closed, proprietary systems, users (and businesses) are stuck with whatever the single provider determines is best.  Open source completely rewrites that book and lets business and industry use common code to improve upon the quality of their products, creating more opportunities to innovate.

The requirements of the edge need a new solution that builds on existing cloud concepts while providing a consistent developer experience.  The right approach must support diverse edge deployment on any app, hardware device or network.  It must enable zero-trust security, replacing perimeter-based physical and cybersecurity, while enabling central control, automation and autonomous edge operation.

[Image: Linux Foundation Project EVE]

And finally, developers must be able to take any code, package it, and run it at the edge. These edge containers add enhancements to container technology so that they can run both legacy applications that require older operating systems and modern, cloud-native applications, all while being location-aware. Aligned with the specifications of the Open Container Initiative but optimized for the edge, edge containers can run entire operating systems, including Windows-based ones. This flexibility will not only simplify edge application creation but will make the edge truly open, providing an infrastructure that supports boundless innovation.
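As a rough illustration of what an OCI-aligned, location-aware edge workload description might look like, the sketch below builds a minimal image-config-style document. The top-level `architecture` and `os` fields follow the OCI Image Specification; the site label key is a made-up example of how location awareness could be expressed, not a standardized field.

```python
import json

def edge_image_config(os_name: str, arch: str, site: str) -> dict:
    """Minimal sketch of an OCI-style image config for an edge workload.
    Field names "architecture", "os", and "config" follow the OCI Image
    Spec; the site label is an illustrative assumption."""
    return {
        "architecture": arch,   # e.g. "amd64" or "arm64"
        "os": os_name,          # edge containers may run full OSes,
                                # including "windows"
        "config": {
            "Labels": {
                # hypothetical label letting a runtime be location-aware
                "example.edge.site": site,
            },
        },
    }

cfg = edge_image_config("windows", "amd64", "wind-farm-7/turbine-12")
print(json.dumps(cfg, indent=2))
```

The point of the sketch is that the same small document shape can describe a Windows-based legacy workload or a Linux cloud-native one, with placement metadata carried alongside, which is what makes a common open format workable across such diverse sites.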

Open-source software foundations have moved well beyond the common perception of "free software" and have become the new champions of standards in the software-defined world of edge computing. These standards enable organizations to avoid interoperability issues within their diverse infrastructures and propel their continued growth, innovation, and customer acceptance.

About the Author

Said Ouissal is the Founder and CEO of ZEDEDA, Inc., a company that seeks to "extend the cloud so the world can build cloud-native apps outside the datacenter." ZEDEDA is also the main contributor to the Linux Foundation's Project EVE (Edge Virtualization Engine).
