Posted to issues@spark.apache.org by "srpn (Jira)" <ji...@apache.org> on 2020/08/07 11:05:00 UTC

[jira] [Created] (SPARK-32566) kafka consumer cache capacity is unclear

srpn created SPARK-32566:
----------------------------

             Summary: kafka consumer cache capacity is unclear
                 Key: SPARK-32566
                 URL: https://issues.apache.org/jira/browse/SPARK-32566
             Project: Spark
          Issue Type: Documentation
          Components: Structured Streaming
    Affects Versions: 3.0.0
            Reporter: srpn


The [docs|https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html] mention
{noformat}
The cache for consumers has a default maximum size of 64. If you expect
to be handling more than (64 * number of executors) Kafka partitions,
you can change this setting via spark.streaming.kafka.consumer.cache.maxCapacity{noformat}
However, the Structured Streaming code seems to expect one of
{code:java}
spark.kafka.consumer.cache.capacity
spark.sql.kafkaConsumerCache.capacity{code}
It would be nice to clear up this ambiguity in the documentation, or even to merge these configurations in the code.
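
For illustration, a minimal Scala sketch (not part of the original report) of where each key would apply, assuming the Spark 3.x behavior: spark.kafka.consumer.cache.capacity is the key the Structured Streaming source reads (superseding the older spark.sql.kafkaConsumerCache.capacity), while spark.streaming.kafka.consumer.cache.maxCapacity from the linked page belongs to the separate DStream (streaming-kafka-0-10) integration. Broker and topic names below are placeholders:
{code:scala}
// Sketch: raising the Structured Streaming Kafka consumer cache capacity.
// The config key is the one the Spark 3.x source appears to read, not the
// spark.streaming.* key documented on the linked DStream integration page.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("kafka-consumer-cache-demo")
  // Default is 64; raise it if executors handle more partitions than that.
  .config("spark.kafka.consumer.cache.capacity", "128")
  .getOrCreate()

val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
  .option("subscribe", "some-topic")                   // placeholder topic
  .load()
{code}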



