Posted to issues@spark.apache.org by "Takeshi Yamamuro (Jira)" <ji...@apache.org> on 2020/08/07 12:25:00 UTC

[jira] [Commented] (SPARK-32566) kafka consumer cache capacity is unclear

    [ https://issues.apache.org/jira/browse/SPARK-32566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173107#comment-17173107 ] 

Takeshi Yamamuro commented on SPARK-32566:
------------------------------------------

cc: [~kabhwan]

> kafka consumer cache capacity is unclear
> ----------------------------------------
>
>                 Key: SPARK-32566
>                 URL: https://issues.apache.org/jira/browse/SPARK-32566
>             Project: Spark
>          Issue Type: Documentation
>          Components: Structured Streaming
>    Affects Versions: 3.0.0
>            Reporter: srpn
>            Priority: Major
>
> The [docs|https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html] mention
> {noformat}
> The cache for consumers has a default maximum size of 64. If you expect
> to be handling more than (64 * number of executors) Kafka partitions,
> you can change this setting via spark.streaming.kafka.consumer.cache.maxCapacity
> {noformat}
> However, for structured streaming, the code seems to expect
> {code:java}
> spark.kafka.consumer.cache.capacity/spark.sql.kafkaConsumerCache.capacity{code}
> It would be nice to clear up this ambiguity in the documentation, or even to merge these configurations in the code; a sketch of the Structured Streaming side follows below.
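> For illustration, a minimal sketch of where the Structured Streaming setting would be applied, assuming the Spark 3.0 config name quoted above; the app name, broker address, topic, and capacity value are placeholders, not values from the docs:
> {code:scala}
> import org.apache.spark.sql.SparkSession
>
> // Structured Streaming: the per-executor consumer cache is sized by
> // spark.kafka.consumer.cache.capacity (spark.sql.kafkaConsumerCache.capacity
> // in earlier releases), not by spark.streaming.kafka.consumer.cache.maxCapacity,
> // which the DStream-based spark-streaming-kafka-0-10 docs describe.
> val spark = SparkSession.builder()
>   .appName("kafka-consumer-cache-example")   // placeholder app name
>   .config("spark.kafka.consumer.cache.capacity", "128")
>   .getOrCreate()
>
> // Standard Kafka source; broker and topic below are placeholders.
> val df = spark.readStream
>   .format("kafka")
>   .option("kafka.bootstrap.servers", "broker:9092")
>   .option("subscribe", "events")
>   .load()
> {code}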



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org