DStreams vs. DataFrames: Two Flavors of Spark Streaming

April 28, 2020


This post is a guest publication written by Yaroslav Tkachenko, a Software Architect at Activision.

Apache Spark is one of the most popular and powerful large-scale data processing frameworks. It was created as an alternative to Hadoop’s MapReduce framework for batch workloads, but now it also supports SQL, machine learning, and stream processing. Today I want to focus on Spark Streaming and show a few options available for stream processing.

Stream processing is used when dynamic data is generated continuously, and it is often found in big data use cases. In most instances, data is processed in near-real time, one record at a time, and the insights derived from it are used to trigger alerts, render dashboards, and feed machine learning models that can react quickly to new trends in the data.

DStreams vs. DataFrames

Spark Streaming went alpha with Spark 0.7.0. It's based on the idea of discretized streams, or DStreams. Each DStream is represented as a sequence of RDDs, so it's easy to use if you're coming from low-level RDD-backed batch workloads. DStreams have seen a lot of improvements since then, but various challenges remain, primarily because it's a very low-level API.

As a solution to those challenges, Spark Structured Streaming was introduced in Spark 2.0 (and became stable in 2.2) as an extension built on top of Spark SQL. Because of that, it takes advantage of Spark SQL's code and memory optimizations. Structured Streaming also provides very powerful abstractions like the Dataset/DataFrame APIs as well as SQL. No more dealing with RDDs directly!
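
To give a feel for the API, here is a minimal sketch of reading from Kafka with Structured Streaming; the broker address (localhost:9092) and topic name (telemetry) are assumptions for illustration, not part of the original setup:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .appName("StructuredStreamingExample")
  .getOrCreate()

// readStream returns a streaming DataFrame; no RDDs are exposed
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // assumed broker
  .option("subscribe", "telemetry")                    // assumed topic
  .load()

// Kafka records arrive as binary key/value columns
val messages = df.selectExpr("CAST(value AS STRING)")

val query = messages.writeStream
  .format("console")
  .start()
// query.awaitTermination() would block until the stream stops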

Both Structured Streaming and Streaming with DStreams use micro-batching. The biggest differences are latency and message delivery guarantees: Structured Streaming offers exactly-once delivery with latencies of 100+ milliseconds, whereas the Streaming with DStreams approach only guarantees at-least-once delivery but can provide millisecond latencies.

I personally prefer Spark Structured Streaming for simple use cases, but Spark Streaming with DStreams is really good for more complicated topologies because of its flexibility. That's why below I want to show how to use Streaming with DStreams and Streaming with DataFrames (which is typically used with Spark Structured Streaming) to consume and process data from Apache Kafka. I'm going to use Scala, Apache Spark 2.3, and Apache Kafka 2.0.

Also, for the sake of example I will run my jobs using Apache Zeppelin notebooks provided by Qubole. Qubole is a data platform that I use daily. It manages Hadoop and Spark clusters, makes it easy to run ad hoc Hive and Presto queries, and also provides managed Zeppelin notebooks that I happily use. With Qubole I don't need to think much about configuring and tuning Spark and Zeppelin; it's just handled for me.

The actual use case I have is very straightforward:

  • Some sort of telemetry is written to Kafka: small JSON messages with metadata and arbitrary key/value pairs (a hypothetical message shape is sketched after this list)
  • I want to connect to Kafka, consume, and deserialize those messages
  • Then apply transformations if needed
  • Collect some aggregations
  • Finally, I'm interested in anomalies and generally bad data. Since I don't control the producer, I want to catch things like NULLs, empty strings, and maybe incorrect dates or other values with specific formats
  • The job should run for some time, then automatically terminate. Typically, Spark Streaming jobs run continuously, but sometimes it might be useful to run it ad hoc for analysis/debugging (or as an example in my case, since it’s so easy to run a Spark job in a notebook).
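
For illustration, a telemetry message might deserialize into a case class like the one below. All field names here are my assumptions, since the actual schema is controlled by the producer:

// Hypothetical shape of one telemetry message, e.g.
// {"deviceId": "abc-123", "timestamp": "2020-04-28T10:00:00Z",
//  "payload": {"cpu": "0.93", "country": "US"}}
case class TelemetryMessage(
  deviceId: String,            // metadata (hypothetical field)
  timestamp: String,           // metadata (hypothetical field)
  payload: Map[String, String] // arbitrary key/value pairs
)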

Streaming With DStreams

In this approach we use DStreams, which are simply sequences of RDDs.
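
A minimal sketch of consuming from Kafka this way, using the spark-streaming-kafka-0-10 connector; the broker address, topic, and consumer group are assumptions for illustration (and in Zeppelin you would typically build the StreamingContext from the notebook's existing SparkContext):

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val conf = new SparkConf().setAppName("DStreamsExample")
val ssc = new StreamingContext(conf, Seconds(5)) // 5-second micro-batches

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092", // assumed broker address
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "telemetry-consumer", // assumed consumer group
  "auto.offset.reset" -> "latest"
)

val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](Seq("telemetry"), kafkaParams) // assumed topic
)

// Each micro-batch is an RDD of ConsumerRecords; print the raw JSON values
stream.map(_.value).print()

ssc.start()
// Run for a fixed amount of time, then shut down, instead of running forever
ssc.awaitTerminationOrTimeout(60 * 1000)
ssc.stop(stopSparkContext = false, stopGracefully = true)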

Streaming With DataFrames

Now we can try to combine DStreams with the DataFrames API to get the best of both worlds!
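
A sketch of that combination, reusing the stream DStream from the previous example: inside each micro-batch, the RDD of records is converted to a DataFrame so the rest of the processing can use Spark SQL. The telemetry field name is again my assumption:

import org.apache.spark.sql.SparkSession

stream.foreachRDD { rdd =>
  // Skip empty micro-batches so schema inference has data to work with
  if (!rdd.isEmpty) {
    val spark = SparkSession.builder
      .config(rdd.sparkContext.getConf)
      .getOrCreate()
    import spark.implicits._

    // Let Spark SQL infer the JSON schema for this micro-batch
    val df = spark.read.json(rdd.map(_.value).toDS())
    df.createOrReplaceTempView("telemetry")

    // Example check for bad data: NULLs and empty strings
    // ("deviceId" is a hypothetical field name)
    spark.sql(
      """SELECT count(*) AS bad_records
        |FROM telemetry
        |WHERE deviceId IS NULL OR deviceId = ''""".stripMargin
    ).show()
  }
}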

Conclusion

Which approach is better? Since a DStream is just a sequence of RDDs, it's typically used for low-level transformations and processing. Adding the DataFrames API on top of that provides very powerful abstractions like SQL, but requires a bit more configuration. And if you have a simple use case, Spark Structured Streaming might be a better solution in general!

