Posted to dev@kafka.apache.org by "Matthias J. Sax (JIRA)" <ji...@apache.org> on 2017/12/23 20:37:02 UTC

[jira] [Created] (KAFKA-6400) Consider setting default cache size to zero in Kafka Streams

Matthias J. Sax created KAFKA-6400:
--------------------------------------

             Summary: Consider setting default cache size to zero in Kafka Streams
                 Key: KAFKA-6400
                 URL: https://issues.apache.org/jira/browse/KAFKA-6400
             Project: Kafka
          Issue Type: Improvement
          Components: streams
    Affects Versions: 1.0.0
            Reporter: Matthias J. Sax
            Priority: Minor


Since the introduction of record caching in the Kafka Streams DSL, we see regular reports/questions from first-time users about "Kafka Streams does not emit anything" or "Kafka Streams loses messages". Those reports are caused by record caching rather than by bugs, and they indicate a bad user experience.

We might consider setting the default cache size to zero to avoid those issues and improve the experience for first-time users. This holds especially for simple word-count demos. (Note, many people don't copy our word-count example but build their own first demo app.)
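For context, users can already opt out of record caching explicitly by setting `cache.max.bytes.buffering` to zero in their `StreamsConfig` properties, so that every update is forwarded downstream immediately. A minimal sketch (the application id and bootstrap servers are placeholders for illustration):

```java
import java.util.Properties;

public class CacheConfigDemo {

    public static Properties buildConfig() {
        Properties props = new Properties();
        // Placeholder values for this sketch; use your own in a real app.
        props.put("application.id", "wordcount-demo");
        props.put("bootstrap.servers", "localhost:9092");
        // Disable record caching: with a zero-byte cache, each input record
        // produces an output record instead of updates being buffered and
        // deduplicated. This key corresponds to
        // StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG.
        props.put("cache.max.bytes.buffering", 0);
        return props;
    }

    public static void main(String[] args) {
        System.out.println(buildConfig().get("cache.max.bytes.buffering"));
    }
}
```

Making zero the default would simply mean new users get this behavior without having to discover the config first.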

Remark: before we had caching, many users got confused about our update semantics, i.e., that we emit an output record for each input record of a windowed aggregation ("please give me the 'final' result"). Thus, we need to consider this and judge with care so we do not go "back and forth" with the default user experience -- we did get fewer questions about this behavior lately.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)