Posted to users@pulsar.apache.org by Apache Pulsar Slack <ap...@gmail.com> on 2019/05/06 09:11:02 UTC

Slack digest for #general - 2019-05-06

2019-05-05 10:03:54 UTC - Nicolas Ha: No, but I ran the sql command inside the same container
----
2019-05-05 10:04:48 UTC - Nicolas Ha: Also `show schemas in system;` works - so it may be expected? Not sure
----
2019-05-05 10:06:08 UTC - Sijie Guo: It seems the container doesn’t connect to Pulsar
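For example, you could check from inside the container with the SQL shell (the `pulsar` catalog and `public/default` namespace below are just the defaults, adjust to your setup):

  bin/pulsar sql
  presto> show catalogs;
  presto> show schemas in pulsar;
  presto> show tables in pulsar."public/default";

If the `pulsar` catalog or your tenant/namespace schemas don’t show up there, the SQL worker is probably not reaching the broker.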
----
2019-05-05 20:33:01 UTC - Brian Doran: Thanks @Matteo Merli
----
2019-05-06 02:40:17 UTC - Jianfeng Qiao: For the producer, the configuration is:
  batchingEnabled: true
  batchingMaxPublishDelayMs: 1
  blockIfQueueFull: true
  pendingQueueSize: 10000
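If this is the Java client (an assumption, it could equally be another client’s config), the same settings on a ProducerBuilder would look roughly like this, with the service URL and topic as placeholders:

  import java.util.concurrent.TimeUnit;
  import org.apache.pulsar.client.api.Producer;
  import org.apache.pulsar.client.api.PulsarClient;
  import org.apache.pulsar.client.api.PulsarClientException;

  public class BatchingProducerExample {
      public static void main(String[] args) throws PulsarClientException {
          PulsarClient client = PulsarClient.builder()
                  .serviceUrl("pulsar://localhost:6650")                // placeholder service URL
                  .build();

          Producer<byte[]> producer = client.newProducer()
                  .topic("persistent://public/default/my-topic")        // placeholder topic
                  .enableBatching(true)                                 // batchingEnabled: true
                  .batchingMaxPublishDelay(1, TimeUnit.MILLISECONDS)    // batchingMaxPublishDelayMs: 1
                  .blockIfQueueFull(true)                               // blockIfQueueFull: true
                  .maxPendingMessages(10000)                            // pendingQueueSize: 10000
                  .create();

          producer.close();
          client.close();
      }
  }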
----
2019-05-06 02:41:31 UTC - Jianfeng Qiao: Yes, I can try pulsar-perf, thanks for the suggestion.
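A minimal run could look like this (topic name, rate and message size are only examples, not from the original thread):

  bin/pulsar-perf produce persistent://public/default/my-topic --rate 10000 --size 1024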
----
2019-05-06 08:05:30 UTC - Justin: 
----
2019-05-06 08:16:05 UTC - Sijie Guo: the exception means that bookie autorecovery is sending too many read requests to a bookie, which causes the bookie to hit its maximum pending read requests threshold.

increase the value of `maxPendingReadRequestsPerThread`, and decrease `rereplicationEntryBatchSize` so that it is smaller than `maxPendingReadRequestsPerThread`, in your bookkeeper.conf
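For example, in bookkeeper.conf (the values below are only illustrative; what matters is that the batch size stays below the pending-read limit):

  # allow more outstanding read requests per read worker thread
  maxPendingReadRequestsPerThread=10000
  # keep each re-replication read batch below that limit
  rereplicationEntryBatchSize=100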
----
2019-05-06 08:29:41 UTC - gfouquier: @gfouquier has joined the channel
----
2019-05-06 08:38:49 UTC - gfouquier: We recently upgraded to 2.3.1 (though I’m not sure it didn’t happen with the previous version too) and cleaned all existing Pulsar data, and since the restart we get this error on every metrics request, which ends with a 500 error. Has anyone seen this error before? Maybe it happens when no data has been pushed yet?
----