What if Computational Storage never existed?

December 1, 2020

The ever-mounting datasphere calls for continuous evolution in the way we store and process data. IDC recently released a report forecasting the collective sum of the world’s data, predicting that it will grow from 33 zettabytes (ZB) in 2018 to 175 ZB by 2025, a compound annual growth rate of 61 percent.

IDC also predicts that by 2024 the amount of data stored in the core will be more than twice the amount stored on endpoints, “completely overturning the dynamic from 2015” as the core becomes the repository of choice. The firm further expects the average person to have nearly 5,000 digital interactions per day by 2025, up from an average of 700 to 800 today.

Today, companies are deploying their data-intensive workloads on edge computing, which brings IT resources closer to the source of data generation. But the edge has several constraints: IT teams struggle to keep up with storage and compute requirements and run into I/O bottlenecks that induce application latency. On top of this, the edge model carries other burdens, such as investment cost and maintenance.

This is where computational storage comes to the rescue. It performs in-situ data processing, shifting some operations into the storage device itself, where data can be processed with less movement and extensive parallelism, delivering faster, near-real-time results.
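
To make the contrast concrete, here is a minimal, hypothetical sketch in Python. It is not a real computational-storage API; StorageDevice, read_all, and offload are illustrative names that simply model where a filter runs: on the host after a bulk transfer, or next to the data with only the results moving.

```python
# A minimal, hypothetical sketch contrasting the two data paths.
# "StorageDevice", "host_side_count", and "in_storage_count" are illustrative
# names, not a real computational-storage API.

class StorageDevice:
    """Models a drive holding raw records plus a small on-drive processor."""

    def __init__(self, records):
        self.records = records  # data at rest on the device

    def read_all(self):
        # Conventional path: every record crosses the storage/host interface.
        return list(self.records)

    def offload(self, predicate):
        # Computational-storage path: the filter runs next to the data,
        # so only the (much smaller) result crosses the interface.
        return [r for r in self.records if predicate(r)]


def host_side_count(device, predicate):
    data = device.read_all()               # bulk transfer to host memory
    return sum(1 for r in data if predicate(r))


def in_storage_count(device, predicate):
    return len(device.offload(predicate))  # only results are transferred


if __name__ == "__main__":
    drive = StorageDevice(range(1_000_000))
    is_match = lambda r: r % 1000 == 0
    assert host_side_count(drive, is_match) == in_storage_count(drive, is_match)
    # Same answer either way; the difference is how many bytes had to move.
```

Both paths return the same answer; the offloaded path simply ships only the matching records across the storage interface, which is where the latency and bandwidth savings come from.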

But what if computational storage never existed?

Computational storage was not born from the ashes of any other storage architecture; it has been coexisting alongside other architectures and is proving to be a more efficient solution. Let’s look at the limitations of edge computing that computational storage is helping to overcome, and at what we would have lost had it never been invented.

Edge computing brings data and applications closer together while reducing network traffic, streamlining operations, and improving the performance of crucial workloads. Despite these benefits, it comes with its own set of limitations, which would have gone unaddressed if computational storage did not exist.

  1. One of the biggest limitations is the compute, storage, and resource requirement: there is a persistent mismatch between storage capacity and the compute available for processing and real-time analysis of the data. Computational storage fixes this with parallel processing, enabling faster, near-real-time processing and analysis.
  2. Along with the space constraint, the edge is also limited by the unavailability of high-power compute resources, and tight IT budgets and critical workloads make this hard to manage. Computational storage also lowers power consumption by distributing processing across the parallelized storage devices.
  3. Traditional compute and storage architectures offer limited bandwidth, creating an I/O bottleneck as ever-growing volumes of data are moved between storage and memory, and the supporting technologies are not efficient enough to absorb it. Because computational storage relies on extensive parallel processing and performs most of the work inside the storage system, it avoids unnecessary data movement, resulting in faster processing, lower latency, and greater efficiency (see the sketch after this list).
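
As points 1 and 3 suggest, the gains come from fanning work out across the drives. The sketch below is again hypothetical and purely illustrative: it uses a thread pool to mimic several drives each filtering its own shard in parallel, with the host only aggregating the already-reduced results. The names on_drive_filter and query_across_drives are assumptions, not a real vendor API.

```python
# Hypothetical sketch of the parallelism argument in points 1 and 3:
# each "drive" filters its own shard concurrently, and the host only
# aggregates the already-reduced results.

from concurrent.futures import ThreadPoolExecutor


def on_drive_filter(shard, predicate):
    """Stands in for a filter executed by a drive's embedded processor."""
    return [record for record in shard if predicate(record)]


def query_across_drives(shards, predicate):
    # One worker per drive: shards are scanned in parallel, and only the
    # matching records ever reach the host for aggregation.
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        results = pool.map(lambda s: on_drive_filter(s, predicate), shards)
    return [record for partial in results for record in partial]


if __name__ == "__main__":
    # Four "drives", each holding a shard of the dataset.
    shards = [range(i, 1_000_000, 4) for i in range(4)]
    hot_records = query_across_drives(shards, lambda r: r % 99_991 == 0)
    print(len(hot_records))  # a small result set instead of a bulk transfer
```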

Without computational storage, we would be left with excessive data movement between compute and storage resources and ever-increasing latencies, and real-time data processing would remain nothing more than a dream.

Do you think the young, up-and-coming computational storage architecture is the answer for data-intensive, latency-sensitive workloads running in an edge environment? Tell us about the challenges you are facing in implementing computational storage in your organization. Share your views in the comments section below.

Sources: TechTarget | IDC | Gartner | SNIA


Calsoft is an ISV-preferred product engineering services partner in the storage, networking, virtualization, cloud, IoT, and analytics domains.
