Posted to dev@kafka.apache.org by "Guozhang Wang (JIRA)" <ji...@apache.org> on 2016/12/02 23:16:58 UTC

[jira] [Created] (KAFKA-4484) Set more conservative default values on RocksDB for memory usage

Guozhang Wang created KAFKA-4484:
------------------------------------

             Summary: Set more conservative default values on RocksDB for memory usage
                 Key: KAFKA-4484
                 URL: https://issues.apache.org/jira/browse/KAFKA-4484
             Project: Kafka
          Issue Type: Bug
          Components: streams
            Reporter: Guozhang Wang
            Assignee: Henry Cai


Quoting from email thread:

{code}

The block cache size defaults to a whopping 100Mb per store, and that gets expensive fast. I reduced it to a few megabytes. My data size is so big that I doubt it is very effective anyway. Now it seems more stable.

I'd say that a smaller default makes sense, especially because the failure case is so opaque (running all tests just fine, but with a serious dataset it dies slowly).

{code}
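
The "gets expensive fast" point follows from simple multiplication: the per-store default is multiplied by every state store instance, one per partition per store. A back-of-envelope sketch (the partition and store counts below are illustrative assumptions, not from the thread):

```java
public class CacheFootprint {
    public static void main(String[] args) {
        long blockCachePerStoreMb = 100; // default discussed above
        int partitions = 30;             // illustrative assumption
        int storesPerTask = 3;           // illustrative assumption

        // One RocksDB instance (and thus one block cache) per store per partition
        long totalMb = blockCachePerStoreMb * partitions * storesPerTask;
        System.out.println(totalMb + " MB"); // prints "9000 MB"
    }
}
```

Nine gigabytes of block cache alone for a modest topology, before write buffers and the JVM heap are counted.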

{code}

Before we have the single-knob memory management feature, I'd like to propose reducing Streams' default config values for RocksDB caching and memory block size. For example, I remember Henry has done some fine tuning of the RocksDB config for his use case:

https://github.com/HenryCaiHaiying/kafka/commit/b297f7c585f5a883ee068277e5f0f1224c347bd4
https://github.com/HenryCaiHaiying/kafka/commit/eed1726d16e528d813755a6e66b49d0bf14e8803
https://github.com/HenryCaiHaiying/kafka/commit/ccc4e25b110cd33eea47b40a2f6bf17ba0924576

We could check whether some of those changes are appropriate in general and, if so, change the default settings accordingly.

{code}
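
Until the defaults change, users can override the RocksDB settings themselves via a `RocksDBConfigSetter`. A minimal sketch along those lines (the class name and the specific sizes are illustrative assumptions, not values recommended by this ticket):

```java
import java.util.Map;

import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

// Applied by Streams to every RocksDB state store it opens.
public class SmallMemoryRocksDbConfigSetter implements RocksDBConfigSetter {

    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        // Shrink the per-store block cache well below the 100 MB default.
        tableConfig.setBlockCacheSize(8 * 1024 * 1024L);  // 8 MB, illustrative
        tableConfig.setBlockSize(4 * 1024L);              // 4 KB blocks
        options.setTableFormatConfig(tableConfig);

        // Also cap memtable memory: smaller and fewer write buffers.
        options.setWriteBufferSize(4 * 1024 * 1024L);     // 4 MB, illustrative
        options.setMaxWriteBufferNumber(2);
    }
}
```

It is registered through the Streams config, e.g. `props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, SmallMemoryRocksDbConfigSetter.class);`, which is also where more conservative defaults would take effect transparently if this ticket lands.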



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)