Posted to users@pulsar.apache.org by Apache Pulsar Slack <ap...@gmail.com> on 2019/09/16 09:11:03 UTC

Slack digest for #general - 2019-09-16

2019-09-15 10:24:16 UTC - Tilden: Hi, is deployment of Pulsar supported on OpenShift 3.9? Are there any examples of it?
----
2019-09-15 12:26:37 UTC - wlkid: got it. thanks a lot!
----
2019-09-15 14:13:53 UTC - dong: Hey, I want to ask: does the broker cache for a single Pulsar partition have copies on multiple brokers, or does it live on only one broker? If the cache is on only one node, is there a hotspot problem?
----
2019-09-15 14:16:42 UTC - dong: I want to ask: does the broker cache for a single Pulsar partition have copies on multiple brokers, or is it on only one broker? If the cache is on only one node, is there a hotspot problem for reads or writes? @Apache Pulsar Admin @Vladimir Shchur
----
2019-09-15 14:33:56 UTC - Matteo Merli: it’s the earliest message in the topic. At that point it’s completely independent of a subscription, so the  “acked” vs “non-acked” distinction is not correct.

The earliest message available depends mainly on the time retention configuration.
+1 : Poule
----
2019-09-15 14:37:04 UTC - Matteo Merli: The broker cache is only used as an optimization to avoid reading messages from storage nodes when the consumers are caught up with the producers.

If a broker crashes, the next broker will just deliver those messages reading from storage.
----
2019-09-15 14:53:06 UTC - dong: Are the caches for all partitions of a topic on a single broker node?
----
2019-09-15 14:54:04 UTC - Matteo Merli: no, partitions are independently assigned to different brokers
----
2019-09-15 14:54:23 UTC - Matteo Merli: each broker has the cache for the partitions that it is currently serving
----
2019-09-15 14:57:41 UTC - dong: Is the granularity of the cache per partition, or finer-grained, per bookie segment?
----
2019-09-15 14:58:19 UTC - Matteo Merli: partition
----
2019-09-15 14:58:37 UTC - Matteo Merli: each partition is served by 1 single broker, at a given point in time
----
2019-09-15 14:59:37 UTC - dong: Can the partition cache become skewed, resulting in unbalanced load?
----
2019-09-15 15:02:16 UTC - Matteo Merli: that depends on having the partitions be evenly assigned
----
2019-09-15 15:02:54 UTC - Matteo Merli: The mechanism of assignment is explained here: <https://pulsar.apache.org/docs/en/administration-load-balance/>
----
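[Editor's note] The load-balance doc linked above describes load-aware placement. A minimal sketch of the idea (the `assign` helper and broker names here are hypothetical; real Pulsar assigns topic *bundles* through its load manager, not individual partitions like this):

```python
# Toy version of load-aware assignment: each partition goes to whichever
# broker currently carries the least load. Hypothetical simplification of
# the mechanism in Pulsar's load-balance documentation.

def assign(partitions, brokers):
    """Assign each partition to the currently least-loaded broker."""
    load = {b: 0 for b in brokers}          # running load per broker
    placement = {}
    for p in partitions:
        target = min(load, key=load.get)    # pick the least-loaded broker
        placement[p] = target
        load[target] += 1                   # account for the new partition
    return placement

placement = assign(["p0", "p1", "p2", "p3"], ["broker-1", "broker-2"])
# With equal per-partition load, the partitions spread evenly: two per broker.
```

If per-partition load is uneven, this greedy placement is exactly where the "skew" dong asks about can appear, which is why Pulsar also supports shedding load off overloaded brokers.
----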
2019-09-15 15:05:13 UTC - dong: OK. If the cache were designed per bookie segment, would that avoid the hotspot issue, similar to how the data is stored in Apache BookKeeper?
----
2019-09-15 15:09:19 UTC - Matteo Merli: these are 2 separate issues. Bookies also have their cache in memory, depending on the segments assigned to them.
----
2019-09-15 15:10:06 UTC - Matteo Merli: broker is serving the partition, so it keeps the cache of what it’s serving
----
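[Editor's note] The broker-cache behavior Matteo describes can be sketched as a toy model (all class and variable names are hypothetical, not Pulsar's actual implementation): the owning broker caches the tail of each partition it serves, so tailing reads hit memory while catch-up reads fall through to storage.

```python
# Toy model of the broker cache: writes are always persisted to storage and
# also kept in a bounded in-memory tail cache. Consumers that are caught up
# ("tailing") read from the cache; consumers far behind read from storage.

class Broker:
    CACHE_SIZE = 100                        # max cached entries per partition

    def __init__(self):
        self.cache = {}                     # partition -> {entry_id: payload}

    def write(self, partition, entry_id, payload, storage):
        storage[(partition, entry_id)] = payload        # always persisted
        part = self.cache.setdefault(partition, {})
        part[entry_id] = payload                        # cache the tail
        while len(part) > self.CACHE_SIZE:
            part.pop(min(part))                         # evict oldest entry

    def read(self, partition, entry_id, storage):
        part = self.cache.get(partition, {})
        if entry_id in part:                            # tailing read: hit
            return part[entry_id], "cache"
        return storage[(partition, entry_id)], "storage"  # catch-up: miss

broker, storage = Broker(), {}
for i in range(150):
    broker.write("p0", i, f"msg-{i}", storage)
tail = broker.read("p0", 149, storage)      # recent entry: served from cache
old = broker.read("p0", 0, storage)         # old entry: evicted, from storage
```

This also illustrates why a broker crash loses nothing: the cache is purely an optimization, and every entry remains readable from storage.
----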
2019-09-15 15:24:06 UTC - dong: Thank you for the link; I have read it. So to solve the problem of a single partition's cache load being too high, Pulsar migrates the partition to a low-load broker. Does the original cache become invalid during the migration?
----
2019-09-15 15:27:15 UTC - Matteo Merli: the topic itself is briefly unavailable (~100ms) during the transition
----
2019-09-15 15:30:01 UTC - dong: Another effect is that the migration invalidates the cache, so consumers' read performance at that moment will be degraded.
----
2019-09-15 15:32:34 UTC - dong: Or does the cache not expire, and is it migrated completely to the new broker?
----
2019-09-15 15:35:02 UTC - Matteo Merli: the new broker will fetch the data from storage, which has its own cache
----
2019-09-15 15:38:16 UTC - dong: Got it. That is, consumption performance at that moment may be reduced.
----
2019-09-15 15:38:54 UTC - dong: When reading in tailing mode.
----
2019-09-15 15:42:49 UTC - Matteo Merli: it would be a brief amount of time. the system needs in any case to be able to read from storage faster than the incoming rate of data
----
2019-09-15 15:43:36 UTC - Matteo Merli: otherwise it would not be able to recover after any minor hiccups
----
2019-09-15 15:46:32 UTC - dong: “the system needs in any case to be able to read from storage faster than the incoming rate of data”
----
2019-09-15 15:47:22 UTC - dong: Why? My understanding is that data that has just arrived is in memory, and reading from memory should be fastest, not reading from storage.
----
2019-09-15 15:48:08 UTC - Matteo Merli: it’s more efficient, but the purpose of a messaging system is to be a “substantially large buffer”
----
2019-09-15 15:48:17 UTC - Matteo Merli: meaning: larger than RAM
----
2019-09-15 15:49:10 UTC - Matteo Merli: if a consumer is down for > X amount of time, when it comes back it needs to be able to “catch up” and drain the accumulated backlog of data, faster than the incoming rate
----
2019-09-15 15:49:23 UTC - Matteo Merli: at that point, any cache is completely useless
----
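[Editor's note] The catch-up argument above is simple arithmetic: a backlog drains only if the read rate exceeds the incoming rate, and the drain time is the backlog divided by the rate difference. A hypothetical helper to illustrate:

```python
# Back-of-the-envelope version of the point above: a consumer that fell
# behind catches up only if it reads from storage faster than new data
# keeps arriving.

def drain_seconds(backlog_msgs, read_rate, incoming_rate):
    """Seconds to drain the backlog; None if it can never drain."""
    if read_rate <= incoming_rate:
        return None                 # the backlog grows (or holds) forever
    return backlog_msgs / (read_rate - incoming_rate)

# e.g. 1,000,000 backlogged messages, reading 50k msg/s while 10k msg/s
# keep arriving: 1_000_000 / (50_000 - 10_000) = 25 seconds to catch up.
```

Note that while draining a large backlog, the reads are far behind the write head, so (as Matteo says) no in-memory cache can help; sustained storage read throughput is what matters.
----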
2019-09-15 15:57:14 UTC - dong: Got it; I think I understand now. Thank you for your patience.
----
2019-09-15 15:57:39 UTC - Matteo Merli: :+1:
----
2019-09-15 16:02:11 UTC - dong: It is midnight (24:00) in China now; what time is it where you are?
----
2019-09-15 16:06:03 UTC - Matteo Merli: 9am
----
2019-09-15 16:07:24 UTC - dong: :joy:
----
2019-09-15 16:07:53 UTC - dong: Is your company <http://stream.io|stream.io>?
----
2019-09-15 16:09:59 UTC - Matteo Merli: Yes, <http://streaml.io|streaml.io>
----
2019-09-15 16:11:32 UTC - dong: :+1:
----
2019-09-15 16:13:08 UTC - dong: Is Sijie Guo your colleague? I have read the Pulsar articles he wrote. They are very good.
----
2019-09-15 17:32:15 UTC - Rostom: @Rostom has joined the channel
----
2019-09-15 21:15:53 UTC - GerhardM: I had the same issue when sending a message to the broker. In my case (Kubernetes + Istio with mTLS), a DestinationRule for the broker with {tls: {mode: DISABLE}} was a working solution.
----
2019-09-16 00:51:43 UTC - James OSullivan: @James OSullivan has joined the channel
----
2019-09-16 04:37:33 UTC - vikash: Pulsar backlog is not clearing
----
2019-09-16 04:37:45 UTC - vikash: backlog full
----
2019-09-16 04:38:08 UTC - vikash: cannot find anything in the logs either
----
2019-09-16 06:12:33 UTC - dong: @Matteo Merli Hi, does Pulsar support dynamically adding and deleting partitions?
----
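[Editor's note] On the last question: Pulsar supports *increasing* the partition count of a partitioned topic through the admin API (via `pulsar-admin topics update-partitioned-topic`), but not deleting individual partitions. A sketch of building the admin v2 REST URL involved; the host, tenant, namespace, and topic names below are placeholders, and the endpoint path follows Pulsar's admin v2 API:

```python
# Build the admin v2 REST URL used to change a partitioned topic's size.
# Sending a POST to this URL with the new (larger) partition count in the
# body increases the count; decreasing it is not supported.

def update_partitions_url(admin_base, tenant, namespace, topic):
    """URL for the partitioned-topic 'partitions' admin endpoint."""
    return (f"{admin_base}/admin/v2/persistent/"
            f"{tenant}/{namespace}/{topic}/partitions")

url = update_partitions_url("http://localhost:8080",
                            "public", "default", "my-topic")
```
----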