Posted to dev@spark.apache.org by Martin Andersson <ma...@kambi.com> on 2022/08/30 08:04:46 UTC

[Structured Streaming + Kafka] Reduced support for alternative offset management

I was looking for documentation on how checkpointing (or rather, delivery semantics) works when consuming from Kafka with Structured Streaming, and I stumbled across this old documentation (which still somehow exists in the latest versions) at https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html#checkpoints.

This page (which I assume dates from around Spark 2.4?) describes storing offsets via checkpointing as the least reliable method and goes on to explain how to commit offsets to Kafka or to an external storage instead.

It also says:
If you enable Spark checkpointing, offsets will be stored in the checkpoint. (...) Furthermore, you cannot recover from a checkpoint if your application code has changed.

This all leaves me with several questions:

  1.  Is the above quote still true for Spark 3, i.e. that the checkpoint breaks if you change the code? What about changing the subscribe pattern?

  2.  Why was the option to manually commit offsets asynchronously to Kafka removed, when it was deemed more reliable than checkpointing? Not to mention that storing offsets in Kafka lets you use all the tools in the Kafka distribution to easily reset/rewind offsets on specific topics, which doesn't seem to be possible when using checkpoints.

  3.  From a user perspective, storing offsets in Kafka offers more features. From a developer perspective, having to re-implement offset storage with checkpointing across several output systems (such as HDFS, AWS S3 and other object storages) seems like a lot of unnecessary work and re-inventing the wheel.
      Is the discussion leading up to the decision to only support storing offsets with checkpointing documented anywhere, perhaps in a JIRA?

Thanks for your time

Re: [Structured Streaming + Kafka] Reduced support for alternative offset management

Posted by Jungtaek Lim <ka...@gmail.com>.
Please consider DStream old-school technology and migrate to Structured
Streaming. Little development effort goes into DStream these days; the main
focus is Spark SQL and, for streaming workloads, Structured Streaming.
For Kafka integration, the guide doc is here:
https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html

Your questions still apply to the Kafka integration in Structured Streaming,
though. The main reason we maintain our own checkpoint is to guarantee
fault tolerance: to provide fault-tolerant semantics, the query must be
able to replay exactly the same data from the latest successful batch. That
is neither feasible nor reliable if we rely on Kafka's commit mechanism.
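
For reference, a minimal sketch of the checkpoint-based approach (the topic
name, paths and broker address below are placeholders): the
checkpointLocation option is where the query records its offsets, and a
restarted query resumes from the last successful batch recorded there.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("kafka-checkpoint-demo").getOrCreate()

val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092")
  .option("subscribe", "events")
  .load()

df.writeStream
  .format("parquet")
  .option("path", "/data/out/events")
  // Offsets (and sink metadata) are tracked here, not in Kafka; on restart
  // the query replays exactly from the last successful batch.
  .option("checkpointLocation", "/data/checkpoints/events")
  .start()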

You can still easily build a custom streaming query listener that commits
the progress to Kafka separately, so that you can also leverage the
ecosystem of Kafka. This project is an example:
https://github.com/HeartSaVioR/spark-sql-kafka-offset-committer
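
For illustration, here is a minimal sketch of such a listener. This is not
the linked project's actual code: the class name and helper details are made
up, and it assumes json4s (which Spark bundles) plus Kafka's AdminClient
from kafka-clients 2.5+.

import scala.util.Try

import org.apache.kafka.clients.admin.AdminClient
import org.apache.kafka.clients.consumer.OffsetAndMetadata
import org.apache.kafka.common.TopicPartition
import org.apache.spark.sql.streaming.StreamingQueryListener
import org.apache.spark.sql.streaming.StreamingQueryListener._
import org.json4s.{DefaultFormats, Formats}
import org.json4s.jackson.JsonMethods.parse

// Mirrors each micro-batch's ending Kafka offsets into a consumer group,
// purely for visibility from Kafka-side tooling. Spark still recovers
// from its own checkpoint; this commit is informational only.
class KafkaOffsetCommitListener(admin: AdminClient, groupId: String)
    extends StreamingQueryListener {

  private implicit val formats: Formats = DefaultFormats

  override def onQueryStarted(event: QueryStartedEvent): Unit = ()
  override def onQueryTerminated(event: QueryTerminatedEvent): Unit = ()

  override def onQueryProgress(event: QueryProgressEvent): Unit = {
    event.progress.sources.foreach { source =>
      // For a Kafka source, endOffset is JSON like {"topic":{"0":42,"1":17}}.
      // Other sources won't parse into this shape, so failures are ignored.
      Try(parse(source.endOffset).extract[Map[String, Map[String, Long]]])
        .foreach { byTopic =>
          val offsets = new java.util.HashMap[TopicPartition, OffsetAndMetadata]()
          for ((topic, partitions) <- byTopic; (partition, offset) <- partitions)
            offsets.put(new TopicPartition(topic, partition.toInt),
                        new OffsetAndMetadata(offset))
          if (!offsets.isEmpty)
            admin.alterConsumerGroupOffsets(groupId, offsets).all().get()
        }
    }
  }
}

Register it with spark.streams.addListener(new KafkaOffsetCommitListener(admin,
"some-group")) and Kafka-side tools such as kafka-consumer-groups.sh can then
see (and reset) the query's progress under that group.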

Hope this helps.

Thanks,
Jungtaek Lim (HeartSaVioR)
