Posted to dev@pulsar.apache.org by Apache Pulsar Slack <ap...@gmail.com> on 2019/07/25 09:11:02 UTC

Slack digest for #dev - 2019-07-25

2019-07-25 06:19:44 UTC - Kenta Kusumoto: Hi. We have a problem where, when there is a large backlog of messages (8 million messages), the consumer is not able to drain the backlog and it keeps growing. There is no problem when there is no backlog, but the problem arises if the consumer has been shut down for a while and is then restarted (deleting the backlog manually is not an option). We tried Pulsar 2.4.0 because it contains a backlog-related bug fix, but it didn't help. We are running 3 bookies in Azure VMs. Does anyone have pointers on how to handle the problem?
----
2019-07-25 06:25:42 UTC - Sijie Guo: @Kenta Kusumoto :

If you have a huge backlog, consider increasing `dbStorage_rocksDB_blockCacheSize` in conf/bookkeeper.conf to a larger value like 1~2 GB, and restart your bookies. I suspect you are not able to consume the backlog fast enough because the index cannot be fully loaded in memory, hence there is a lot of index page swapping in and out.
----
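For reference, the change described above would look roughly like the following on each bookie. This is only a sketch: the 2 GB value is an example, and the restart commands assume the BookKeeper daemon bundled with a standard Pulsar install.

    # conf/bookkeeper.conf
    # RocksDB block cache used by DbLedgerStorage for the entry-location index.
    # Default is 268435456 bytes (256 MB); 2 GB shown here.
    dbStorage_rocksDB_blockCacheSize=2147483648

    # Restart each bookie after the change, e.g.:
    #   bin/pulsar-daemon stop bookie
    #   bin/pulsar-daemon start bookie
----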
2019-07-25 06:31:19 UTC - Kenta Kusumoto: Thanks, I will try this! Currently I have `dbStorage_rocksDB_blockCacheSize=268435456`. So, is this the default size of 256MB?
----
2019-07-25 06:35:54 UTC - Sijie Guo: yes
+1 : Kenta Kusumoto
----
2019-07-25 08:46:14 UTC - Kenta Kusumoto: I increased it to 2 GB and tried with a backlog of 4 million messages. It looks like it is working OK, thanks a lot!
----
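One rough way to confirm the backlog is actually draining is to poll the topic stats with pulsar-admin and watch the subscription backlog shrink (the topic name below is a placeholder for illustration):

    bin/pulsar-admin topics stats persistent://public/default/my-topic
    # the "msgBacklog" value under the consumer's subscription should decrease over time
----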
2019-07-25 09:04:02 UTC - Sijie Guo: awesome!
----