Posted to issues@spark.apache.org by "Amit Menashe (Jira)" <ji...@apache.org> on 2020/09/22 08:09:00 UTC
[jira] [Created] (SPARK-32962) Spark Streaming
Amit Menashe created SPARK-32962:
------------------------------------
Summary: Spark Streaming
Key: SPARK-32962
URL: https://issues.apache.org/jira/browse/SPARK-32962
Project: Spark
Issue Type: Bug
Components: DStreams
Affects Versions: 2.4.5
Reporter: Amit Menashe
Hey there,
I'm running a Spark Streaming job integrated with Kafka, which manages its offset commits in Kafka itself.
The problem is that when a failure occurs, I want to reprocess the offset ranges that failed, so I catch the exception and do NOT commit that range (with commitAsync).
However, I noticed the stream keeps proceeding even though no commit was made.
Moreover, I later removed all the commitAsync calls entirely, and the stream still kept proceeding!
I suspect there may be some inner cache or state that lets the streaming job keep consuming entries from Kafka.
Could you please advise?
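
For illustration, here is a minimal sketch of the pattern I'm describing, assuming the spark-streaming-kafka-0-10 direct stream API (the broker address, topic name, and group id below are placeholder values, not my actual setup):

import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

object OffsetCommitExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("offset-commit-example")
    val ssc = new StreamingContext(conf, Seconds(10))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",   // placeholder broker
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "example-group",             // placeholder group id
      "auto.offset.reset" -> "latest",
      // auto-commit disabled; offsets are committed manually below
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](Seq("example-topic"), kafkaParams)  // placeholder topic
    )

    stream.foreachRDD { rdd =>
      // Capture the offset ranges for this batch before processing.
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      try {
        // Stand-in for the real per-record processing.
        rdd.foreach(record => println(record.value))
        // Commit back to Kafka only after the batch succeeds.
        stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
      } catch {
        case e: Exception =>
          // On failure, skip the commit so this range can be reprocessed.
          println(s"Batch failed, not committing: ${e.getMessage}")
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}

The expectation was that skipping commitAsync after a failed batch would cause those offset ranges to be reprocessed, but the running stream advances regardless, and it even advances with no commitAsync calls at all.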