Posted to issues@spark.apache.org by "Gabor Somogyi (JIRA)" <ji...@apache.org> on 2019/03/04 09:21:00 UTC

[jira] [Created] (SPARK-27042) Query fails if task is failing due to corrupt cached Kafka producer

Gabor Somogyi created SPARK-27042:
-------------------------------------

             Summary: Query fails if task is failing due to corrupt cached Kafka producer
                 Key: SPARK-27042
                 URL: https://issues.apache.org/jira/browse/SPARK-27042
             Project: Spark
          Issue Type: Bug
          Components: Structured Streaming
    Affects Versions: 2.4.0, 2.3.3, 2.2.3, 2.1.3, 3.0.0
            Reporter: Gabor Somogyi


If a task fails due to a corrupt cached Kafka producer and the task is retried on the same executor, it keeps getting the same KafkaProducer instance over and over again, unless the producer is invalidated by the timeout configured via "spark.kafka.producer.cache.timeout" — which is unlikely to happen within the retry window. After several retries the query fails.
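The failure mode can be illustrated with a minimal sketch of a timeout-based cache (this is NOT Spark's actual producer cache code — names like ProducerCacheSketch and Entry are hypothetical): entries are only evicted after the configured timeout, so a retried task on the same executor keeps receiving the same, possibly corrupt, cached instance.

```scala
import java.util.concurrent.ConcurrentHashMap

// Hypothetical sketch of a timeout-based producer cache. A String stands in
// for the real KafkaProducer; the point is the reuse behavior, not Kafka.
object ProducerCacheSketch {
  case class Entry(producer: String, createdAt: Long)

  private val cache = new ConcurrentHashMap[String, Entry]()

  // Stand-in for "spark.kafka.producer.cache.timeout" (default: 10 minutes).
  val timeoutMs: Long = 10 * 60 * 1000L

  def getOrCreate(key: String, now: Long): String = {
    val cached = cache.get(key)
    if (cached != null && now - cached.createdAt < timeoutMs) {
      // Within the timeout the cached instance is returned as-is,
      // even if it is corrupt -- a retried task hits this branch.
      cached.producer
    } else {
      // Only after the timeout expires is a fresh producer created.
      val fresh = Entry(s"producer-$now", now)
      cache.put(key, fresh)
      fresh.producer
    }
  }
}
```

A retry a few seconds after the failure gets the identical instance back; only a retry after the full cache timeout would get a fresh one, which is why repeated retries exhaust and the query fails.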



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org