Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2018/12/19 07:35:00 UTC

[jira] [Resolved] (SPARK-26391) Spark Streaming Kafka with Offset Gaps

     [ https://issues.apache.org/jira/browse/SPARK-26391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-26391.
----------------------------------
    Resolution: Invalid

Questions should go to the mailing list, where you are more likely to get a good answer from developers and users.

> Spark Streaming Kafka with Offset Gaps
> --------------------------------------
>
>                 Key: SPARK-26391
>                 URL: https://issues.apache.org/jira/browse/SPARK-26391
>             Project: Spark
>          Issue Type: Question
>          Components: Spark Core, Structured Streaming
>    Affects Versions: 2.4.0
>            Reporter: Rishabh
>            Priority: Major
>
> I have an app that uses Kafka Streams to pull data from the `input` topic and push it to the `output` topic with `processing.guarantee=exactly_once`. Because of `exactly_once`, gaps (transaction markers) are created in Kafka. Let's call this app `kafka-streamer`.
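> For reference, a minimal sketch of how `kafka-streamer` is configured (application id, broker address and serdes are assumptions for illustration; `processing.guarantee=exactly_once` is the setting that matters here):
> {code:scala}
> import java.util.Properties
> import org.apache.kafka.common.serialization.Serdes
> import org.apache.kafka.streams.{KafkaStreams, StreamsBuilder, StreamsConfig}
>
> val props = new Properties()
> props.put(StreamsConfig.APPLICATION_ID_CONFIG, "kafka-streamer")    // assumed name
> props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // assumed broker
> props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass)
> props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass)
> // exactly_once writes transaction markers, which leave offset gaps in the output topic
> props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE)
>
> val builder = new StreamsBuilder()
> builder.stream[String, String]("input").to("output")
>
> new KafkaStreams(builder.build(), props).start()
> {code}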
> Now I have another app that listens to this output topic (actually to multiple topics matched by a Pattern/Regex) and processes the data using [https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html]. Let's call this app `spark-streamer`.
> Due to the gaps, the first thing that happens is that Spark Streaming fails. To fix this I enabled `spark.streaming.kafka.allowNonConsecutiveOffsets=true` in the Spark config before creating the StreamingContext (a minimal sketch of this setup follows the list below). Now let's look at the issues I faced when starting `spark-streamer`:
>  # Even though there are new offsets to be polled/consumed, another message has to be pushed to the topic partition before processing starts. If I start the app while there are messages queued to be polled and don't push any new message to the topic, the code times out after the default 120ms and throws an exception.
>  # It doesn't fetch the last record; it only fetches records up to the second-to-last one. This means that to poll/process the last record, yet another message has to be pushed. This is a problem for us since `spark-streamer` listens to multiple topics (based on a pattern) and there may be a topic with low throughput whose data should still make it to Spark for processing.
>  # In general, if no data/message is pushed, the app dies after the default 120ms polling timeout.
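> As mentioned above, here is a minimal sketch of the `spark-streamer` setup (broker address, group id, topic pattern and batch interval are assumptions for illustration):
> {code:scala}
> import java.util.regex.Pattern
> import org.apache.kafka.common.serialization.StringDeserializer
> import org.apache.spark.SparkConf
> import org.apache.spark.streaming.{Seconds, StreamingContext}
> import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
>
> val conf = new SparkConf()
>   .setAppName("spark-streamer")
>   // tolerate the offset gaps left by the transactional producer
>   .set("spark.streaming.kafka.allowNonConsecutiveOffsets", "true")
> val ssc = new StreamingContext(conf, Seconds(10)) // assumed batch interval
>
> val kafkaParams = Map[String, Object](
>   "bootstrap.servers" -> "localhost:9092",        // assumed broker
>   "key.deserializer" -> classOf[StringDeserializer],
>   "value.deserializer" -> classOf[StringDeserializer],
>   "group.id" -> "spark-streamer",                 // assumed group id
>   "auto.offset.reset" -> "earliest",
>   "enable.auto.commit" -> (false: java.lang.Boolean)
> )
>
> val stream = KafkaUtils.createDirectStream[String, String](
>   ssc,
>   LocationStrategies.PreferConsistent,
>   // assumed pattern matching the output topics
>   ConsumerStrategies.SubscribePattern[String, String](Pattern.compile("output.*"), kafkaParams)
> )
>
> stream.foreachRDD { rdd => rdd.foreach(record => println(record.value)) }
>
> ssc.start()
> ssc.awaitTermination()
> {code}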
> Now, in the limited amount of time I had, I tried going through the spark-streaming-kafka code and was only able to find an answer to the third problem, which is this - [https://github.com/apache/spark/blob/master/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaDataConsumer.scala#L178]
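> If I read that code correctly, the timeout in question is controlled by `spark.streaming.kafka.consumer.poll.ms`, so a hedged workaround for the third problem might be to raise it (the value below is an arbitrary example), though that only postpones the failure rather than explaining it:
> {code:scala}
> // assumption: a larger poll timeout only postpones the failure when no new messages arrive
> conf.set("spark.streaming.kafka.consumer.poll.ms", "60000")
> {code}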
> My questions are:
>  # Why do we throw an exception in `compactedNext()` if no data is polled?
>  # I wasn't able to figure out why the first and second issues happen; it would be great if somebody could point out a solution or the reason behind this behaviour.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org