Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2020/09/24 07:44:00 UTC
[jira] [Resolved] (SPARK-32962) Spark Streaming
[ https://issues.apache.org/jira/browse/SPARK-32962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon resolved SPARK-32962.
----------------------------------
Resolution: Invalid
Looks more like a question. Let's ask it on the mailing list to get some advice before filing it as an issue. See also https://spark.apache.org/community.html
> Spark Streaming
> ---------------
>
> Key: SPARK-32962
> URL: https://issues.apache.org/jira/browse/SPARK-32962
> Project: Spark
> Issue Type: Bug
> Components: DStreams
> Affects Versions: 2.4.5
> Reporter: Amit Menashe
> Priority: Trivial
>
> Hey there,
> I'm using a Spark Streaming job integrated with Kafka (offset commits are managed in Kafka itself).
> The problem is that when a failure occurs I want to reprocess the offset ranges that went wrong, so I catch the exception and do NOT commit (with commitAsync) that range.
> However, I notice the stream keeps proceeding (without any commit being made).
> Moreover, I later removed all the commitAsync calls and the stream still kept proceeding!
> I guess there might be some inner cache or state that lets the streaming job keep consuming entries from Kafka.
>
> Could you please advise?
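
For reference, a minimal sketch of the commit pattern described above, assuming the spark-streaming-kafka-0-10 direct stream API (the broker address, topic name, and group id are placeholders, not taken from the report). It also illustrates why skipping commitAsync does not make a running job re-read a batch: the driver tracks its own offsets, and committed offsets are only consulted when the job (re)starts.

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010._
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

    val conf = new SparkConf().setAppName("offset-commit-sketch")
    val ssc  = new StreamingContext(conf, Seconds(10))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "localhost:9092",            // placeholder broker
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "example-group",              // placeholder group id
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent,
      Subscribe[String, String](Seq("example-topic"), kafkaParams))

    stream.foreachRDD { rdd =>
      // Offset ranges for this batch, captured before any shuffle.
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      try {
        rdd.foreachPartition(records => records.foreach(_ => ())) // application processing (placeholder)
        // Commit only on success. This records offsets in Kafka for the next
        // restart; it does not change what the running stream reads next.
        stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
      } catch {
        case e: Exception =>
          // Not committing does not replay the range in the running job; the
          // driver schedules the next batch from its own tracked position.
          // Reprocessing requires restarting the job, which resumes from the
          // last committed offsets.
          println(s"batch failed, offsets not committed: ${e.getMessage}")
      }
    }

    ssc.start()
    ssc.awaitTermination()

Under these assumptions, the behaviour reported here is expected rather than a bug, which is consistent with resolving the ticket as a question for the mailing list.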
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org