Posted to users@pulsar.apache.org by Apache Pulsar Slack <ap...@gmail.com> on 2019/09/28 09:11:03 UTC

Slack digest for #general - 2019-09-28

2019-09-27 10:19:19 UTC - 343355247: @343355247 has joined the channel
----
2019-09-27 10:31:24 UTC - Jack: @Jack has joined the channel
----
2019-09-27 13:40:17 UTC - Jesse Zhang (Bose): @Matteo Merli in my test, the effective value of `NackRedeliveryDelay` is about 1/4-1/3 of the specified value. Is this an issue?
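For reference, a minimal sketch of how this delay is typically configured on the Java consumer; the broker URL, topic, subscription name, and 60-second value here are illustrative, not taken from the thread:
```java
import java.util.concurrent.TimeUnit;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;

public class NackDelayExample {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")   // illustrative broker URL
                .build();

        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://public/default/my-topic")   // illustrative topic
                .subscriptionName("my-sub")
                // Negatively acknowledged messages are scheduled for
                // redelivery after this delay.
                .negativeAckRedeliveryDelay(60, TimeUnit.SECONDS)
                .subscribe();

        Message<byte[]> msg = consumer.receive();
        consumer.negativeAcknowledge(msg);   // triggers redelivery after the delay

        consumer.close();
        client.close();
    }
}
```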
----
2019-09-27 18:29:32 UTC - Karthik Ramasamy: yes, you need an O’Reilly signup
----
2019-09-27 21:47:13 UTC - Alex Mault: Hi all! Looking to get a Pulsar deployment into k8s production. Trying to narrow down what resource requirements are reasonable for a prod Pulsar cluster. The example has 15GB (!!) per container, which seems... high?
----
2019-09-27 21:55:02 UTC - Ali Ahmed: @Alex Mault Do you have an estimate of your traffic?
----
2019-09-27 21:55:32 UTC - Alex Mault: 100k msgs / day (pretty much nothing... just a minimal cluster for now)
----
2019-09-27 21:57:21 UTC - Ali Ahmed: I would try a 2-broker, 3-bookie config with 8GB RAM for each.
----
2019-09-27 21:58:00 UTC - Alex Mault: 40GB total? Or 8GB for the brokers, 8GB for the bookies?
----
2019-09-27 22:01:43 UTC - Matteo Merli: At that rate 1GB each for memory should be more than enough
----
2019-09-27 22:02:47 UTC - Alex Mault: yea, that's more in line with my thinking. Just being sure to adjust the memory `-Xmx512M` arg properly.
----
2019-09-27 22:04:50 UTC - Alex Mault: FYI @Matteo Merli (related to above) I've got another PR coming your way. This time for the helm `values-mini.yaml` that is in the example helm deployment. I've seen several people here complain that their pods are getting OOM'd after helm deployment - looks like it is because the application is configured to use `-Xmx128m` but then the pod only requests:
```
resources:
  requests:
    memory: 64Mi
```
----
2019-09-27 22:05:06 UTC - Alex Mault: thus, when the memory usage creeps up, k8s will kill / evict the pod.
----
2019-09-27 22:05:28 UTC - Matteo Merli: yes
----
2019-09-27 22:08:13 UTC - Alex Mault: oops, got that wrong - it's a request - not a limit...
----
2019-09-27 23:46:05 UTC - Addison Higham: spent a few hours trying to get function state working:
```
23:11:55.591 [client-scheduler-OrderedScheduler-0-0] INFO  org.apache.bookkeeper.clients.impl.container.StorageContainerChannel - Failed to fetch info of storage container (0) - 'StorageContainerError : StatusCode = INTERNAL_SERVER_ERROR, Error = fail to fetch location for storage container (0)'. Retry in 200 ms ...
```
is where I ended up. Also, somewhere during turning it on, I got one of my bookies segfaulting the JVM in RocksDB code in a loop. Turning `StreamStorageLifecycleComponent` back off and restarting the bookie resulted in one more segfault, but then it recovered.
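For context, the docs-style setup for function state is to load the table service on the bookies and point the functions worker at it; the hostname placeholder and the 4181 default stream-storage port below are illustrative:
```
# bookkeeper.conf on each bookie - loads the stream/table service
extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent

# functions_worker.yml - where the functions worker looks for state storage
stateStorageServiceUrl: bk://<bookie-address>:4181
```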
----
2019-09-27 23:47:10 UTC - Addison Higham: I am wondering if there is some bad metadata either in ZK or on disk, but I can't track it down... if anyone has any ideas of where to go next, that would be useful.
----