Posted to users@pulsar.apache.org by Apache Pulsar Slack <ap...@gmail.com> on 2018/12/05 09:11:02 UTC

Slack digest for #general - 2018-12-05

2018-12-04 11:23:39 UTC - Paul van der Linden: @Paul van der Linden has joined the channel
----
2018-12-04 11:27:42 UTC - Paul van der Linden: I keep getting `10:49:02.893 [main] ERROR org.apache.bookkeeper.bookie.Bookie - There are directories without a cookie, and this is neither a new environment, nor is storage expansion enabled. Empty directories are [data/bookkeeper/journal/current, data/bookkeeper/ledgers/current]` when I spin up more than one bookie. I'm using a statefulset from kubernetes and have set `"useHostNameAsBookieID" : "true"`
----
2018-12-04 11:27:56 UTC - Paul van der Linden: this is with a clean disk and clean zookeeper
----
2018-12-04 11:28:08 UTC - Paul van der Linden: are there any other settings I need to change to allow me to spin up multiple bookies?
----
2018-12-04 11:36:25 UTC - Sijie Guo: Did you ever scale the stateful set up and down before? This error means there is a cookie in the metadata store, but no cookie files were found on your disks.
----
2018-12-04 11:37:03 UTC - Sijie Guo: Or did you ever remove your persistent volume and remount it?
----
2018-12-04 11:42:47 UTC - Paul van der Linden: I have destroyed all storage several times
----
2018-12-04 11:43:02 UTC - Paul van der Linden: so basically from scratch
----
2018-12-04 11:43:46 UTC - Paul van der Linden: in the metadata store means zookeeper I guess?
----
2018-12-04 11:47:26 UTC - Paul van der Linden: (this is in a test cluster)
----
2018-12-04 11:47:42 UTC - Paul van der Linden: You can't scale the bookie cluster?
----
2018-12-04 12:00:43 UTC - Ivan Kelly: you should be able to. but what that error is saying is that there has been another bookie with the same hostname before, and the data that should be on the bookie is not there
----
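For context: a bookie's cookie is stored in two places, as a `VERSION` file inside each journal/ledger directory on disk and as a znode in ZooKeeper under the ledgers root. A rough way to look at both copies (the paths and the `zookeeper-shell` invocation below assume a default-style setup and may differ):
```
# On the bookie's disks: the same directories named in the error message
cat data/bookkeeper/journal/current/VERSION
cat data/bookkeeper/ledgers/current/VERSION

# In the metadata store (ZooKeeper), under the ledgers root (commonly /ledgers)
bin/pulsar zookeeper-shell -server zookeeper:2181 ls /ledgers/cookies
```
----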
2018-12-04 12:00:58 UTC - Ivan Kelly: you haven't wiped out your zookeeper I assume?
----
2018-12-04 12:14:59 UTC - Paul van der Linden: I have wiped out everything, just deleted all PVCs, PVs, pods, deployments, services, statefulsets
----
2018-12-04 12:16:32 UTC - Paul van der Linden: I have wiped everything again
----
2018-12-04 12:16:42 UTC - Paul van der Linden: now I get a different error, so maybe the wipe is not complete
----
2018-12-04 12:17:01 UTC - Paul van der Linden: `org.apache.bookkeeper.bookie.BookieException$InvalidCookieException: instanceId 9c75ae56-65e1-4c0b-8bf4-8c6e3a315dcd is not matching with 20000887-a4c7-49fe-9096-7ae8fe3d8859`
----
2018-12-04 12:18:37 UTC - Paul van der Linden: but not sure what else to wipe out after stopping all instances and all persistent data
----
2018-12-04 12:24:23 UTC - Paul van der Linden: I'm adjusting the "general" example from the repository to get a scalable cluster
----
2018-12-04 12:24:39 UTC - Paul van der Linden: but not sure yet how to get it functional
----
2018-12-04 12:27:44 UTC - Paul van der Linden: I cleared out everything again and added PVC's to zookeeper
----
2018-12-04 12:31:39 UTC - Paul van der Linden: back to the old error
----
2018-12-04 12:32:10 UTC - Paul van der Linden: the bookies keep complaining about the metadata on a new cluster with new storage, clean zookeeper, good bookieID
----
2018-12-04 12:54:52 UTC - Paul van der Linden: mmh
----
2018-12-04 12:55:05 UTC - Paul van der Linden: it's also running a "special" zookeeper install?
----
2018-12-04 13:16:22 UTC - Ivan Kelly: oh, i think this is a known problem. are you using helm charts?
----
2018-12-04 13:20:47 UTC - Paul van der Linden: I'm using kubernetes manifests
----
2018-12-04 13:20:57 UTC - Paul van der Linden: just manual ones atm
----
2018-12-04 13:21:23 UTC - Paul van der Linden: I used the generic one from <https://github.com/apache/pulsar/tree/master/deployment/kubernetes>
----
2018-12-04 13:21:43 UTC - Paul van der Linden: then adjusted it a bit to have more than one bookie
----
2018-12-04 13:21:53 UTC - Paul van der Linden: but I looked at the GKE one
----
2018-12-04 13:22:12 UTC - Paul van der Linden: which looks like it's mostly PVC's and the setting to get the bookie ID from the hostname
----
2018-12-04 13:23:45 UTC - Ivan Kelly: ok, I remember there was an ordering issue with these: the init container should have completed before running the bookies
----
2018-12-04 13:24:14 UTC - Ivan Kelly: i know it's non-ideal, but could you try with sleep 10 at the start of the bookie commands?
----
2018-12-04 13:24:49 UTC - Paul van der Linden: I will try that
----
2018-12-04 13:25:24 UTC - Paul van der Linden: <https://github.com/apache/pulsar/blob/master/deployment/kubernetes/generic/bookie.yaml#L64>
----
2018-12-04 13:25:30 UTC - Paul van der Linden: basically add a sleep 10 there?
----
2018-12-04 13:25:40 UTC - Ivan Kelly: yes
----
2018-12-04 13:26:28 UTC - Paul van der Linden: ok, will do that
----
2018-12-04 13:27:02 UTC - Ivan Kelly: oh, wait
----
2018-12-04 13:27:35 UTC - Ivan Kelly: no, maybe remove the initContainers from bookie.yaml too
----
2018-12-04 13:28:34 UTC - Paul van der Linden: ok
----
2018-12-04 13:29:14 UTC - Ivan Kelly: that initContainers shouldn't be there at all
----
2018-12-04 13:29:20 UTC - Paul van der Linden: mmh ok
----
2018-12-04 13:29:52 UTC - Paul van der Linden: just delete the init container or also the 10 second sleep?
----
2018-12-04 13:30:00 UTC - Ivan Kelly: add the sleep
----
2018-12-04 13:30:01 UTC - Paul van der Linden: (add the 10 second)
----
2018-12-04 13:30:09 UTC - Ivan Kelly: and delete, like do both
----
2018-12-04 13:30:33 UTC - Ivan Kelly: basically, there's a job that initializes the metadata for the whole cluster
----
2018-12-04 13:30:45 UTC - Ivan Kelly: that initContainers block would overwrite that
----
2018-12-04 13:31:15 UTC - Ivan Kelly: there's a better way to do it apart from the sleep, but i would need to test it
----
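To make the suggestion concrete, the two edits discussed above would look roughly like this in bookie.yaml (a sketch only; the actual command in the generic manifest may differ, and the `sleep 10` is the stopgap Ivan describes, not the proper fix):
```
# 1. Delete the initContainers: block that (re)initializes the bookkeeper metadata;
#    the cluster-metadata Job already does that once for the whole cluster.
# 2. Prepend a short sleep so the bookie only starts after that Job has finished.
containers:
  - name: bookie
    command: ["sh", "-c"]
    args:
      - >
        sleep 10 &&
        bin/apply-config-from-env.py conf/bookkeeper.conf &&
        bin/pulsar bookie
```
----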
2018-12-04 13:31:36 UTC - Paul van der Linden: I'm waiting for everything to get deleted again
----
2018-12-04 13:32:08 UTC - Ivan Kelly: I should really get a k8s cluster up and running
----
2018-12-04 13:32:08 UTC - Paul van der Linden: Spinning up new instances
----
2018-12-04 13:33:00 UTC - Paul van der Linden: I'm currently just testing with minikube, before moving it into GKE for a more extensive test
----
2018-12-04 13:33:23 UTC - Ivan Kelly: sure
----
2018-12-04 13:34:17 UTC - Paul van der Linden: Now it's failing with invalid cookie again
----
2018-12-04 13:34:21 UTC - Paul van der Linden: oh wait
----
2018-12-04 13:34:27 UTC - Paul van der Linden: no this is the same error again
----
2018-12-04 13:34:34 UTC - Paul van der Linden: `- There are directories without a cookie, and this is neither a new environment, nor is storage expansion enabled. Empty directories are [data/bookkeeper/journal/current, data/bookkeeper/ledgers/current]`
----
2018-12-04 13:34:46 UTC - Paul van der Linden: now it's just on bookie-0 instead of bookie-1
----
2018-12-04 13:34:54 UTC - Ivan Kelly: ok, one sec, I'll get minikube up
----
2018-12-04 13:39:23 UTC - Paul van der Linden: What I have done so far:
- add PV & PVC to zookeeper (to store on the host, and cleared those between tries)
- make bookie a statefulset instead of a daemonset
- add `useHostNameAsBookieID: "true"` to the bookie configmap
----
2018-12-04 13:39:40 UTC - Paul van der Linden: oh and stripped out prometheus and grafana as I had my own instances
----
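Roughly what those changes amount to in the manifests (a sketch; the names, image tag, storage sizes and labels below are placeholders, not the actual manifest contents):
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: bookie-config            # placeholder name
data:
  # stable bookie identity for a StatefulSet: use the pod hostname instead of an IP
  useHostNameAsBookieID: "true"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: bookie
spec:
  serviceName: bookie
  replicas: 3
  selector:
    matchLabels:
      app: pulsar
      component: bookkeeper
  template:
    metadata:
      labels:
        app: pulsar
        component: bookkeeper
    spec:
      containers:
        - name: bookie
          image: apachepulsar/pulsar:latest   # placeholder tag
          envFrom:
            - configMapRef:
                name: bookie-config
          volumeMounts:
            - name: journal
              mountPath: /pulsar/data/bookkeeper/journal
            - name: ledgers
              mountPath: /pulsar/data/bookkeeper/ledgers
  volumeClaimTemplates:           # one PV per bookie for journal and ledgers
    - metadata:
        name: journal
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
    - metadata:
        name: ledgers
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```
----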
2018-12-04 13:41:41 UTC - Paul van der Linden: I can zip up the whole thing of manifests
----
2018-12-04 13:43:09 UTC - Ivan Kelly: ya, send them on
----
2018-12-04 13:43:29 UTC - Paul van der Linden: ah and it's own namespace as well
----
2018-12-04 13:44:48 UTC - Paul van der Linden: 
----
2018-12-04 13:53:15 UTC - Ivan Kelly: what sequence of commands are you using to boot the cluster?
----
2018-12-04 13:56:41 UTC - Paul van der Linden: i'm just doing a `kubectl apply -f k8s/` atm
----
2018-12-04 13:56:56 UTC - Paul van der Linden: and the 00 and 01 prefix makes sure zookeeper boots first
----
2018-12-04 13:57:01 UTC - Ivan Kelly: ah
----
2018-12-04 13:59:51 UTC - Paul van der Linden: I forgot that the docs explicitly specify the order and wait times
----
2018-12-04 13:59:59 UTC - Paul van der Linden: but doesn't that make it really brittle?
----
2018-12-04 14:00:29 UTC - Paul van der Linden: that means if there is a problem you need to manually stop everything and gradually start component by component
----
2018-12-04 14:01:01 UTC - Ivan Kelly: it should be more robust, yes, the bookies should wait for metadata to have been set up before doing anything themselves
----
2018-12-04 14:01:22 UTC - Ivan Kelly: try renaming cluster-metadata.yaml to 03-cluster-metadata.yaml
----
2018-12-04 14:02:10 UTC - Paul van der Linden: I've just deleted all data/instances again
----
2018-12-04 14:02:21 UTC - Paul van der Linden: and started with zk, wait, metadata, wait
----
2018-12-04 14:02:52 UTC - Paul van der Linden: that seems to solve the issue indeed
----
2018-12-04 14:03:45 UTC - Paul van der Linden: that sounds really scary though. I've moved more towards software that can heal itself as much as possible; this needs manual intervention to get it running even on the first start
----
2018-12-04 14:03:52 UTC - Paul van der Linden: mmh maybe not
----
2018-12-04 14:03:58 UTC - Paul van der Linden: already 2 restarts on bookie-1
----
2018-12-04 14:04:09 UTC - Paul van der Linden: nope
----
2018-12-04 14:04:12 UTC - Paul van der Linden: still the same problem
----
2018-12-04 14:04:19 UTC - Paul van der Linden: I cheered too early
----
2018-12-04 14:04:27 UTC - Ivan Kelly: ok, what sequence did you use?
----
2018-12-04 14:04:41 UTC - Paul van der Linden: kubectl apply -f k8s/01-namespace.yml
----
2018-12-04 14:04:46 UTC - Paul van der Linden: kubectl apply -f k8s/02-zookeeper.yaml
----
2018-12-04 14:04:51 UTC - Paul van der Linden: *wait* till zk is up
----
2018-12-04 14:04:58 UTC - Paul van der Linden: kubectl apply -f k8s/cluster-metadata.yaml
----
2018-12-04 14:05:07 UTC - Paul van der Linden: *wait* till job is successful
----
2018-12-04 14:05:22 UTC - Paul van der Linden: kubectl apply -f k8s/ (spinning up all others)
----
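Scripted, that sequence looks roughly like this (the namespace, label selector and Job name are assumptions that depend on the manifests):
```
kubectl apply -f k8s/01-namespace.yml
kubectl apply -f k8s/02-zookeeper.yaml

# wait until zookeeper is up
kubectl -n pulsar wait --for=condition=ready pod -l component=zookeeper --timeout=300s

kubectl apply -f k8s/cluster-metadata.yaml

# wait until the metadata initialization Job has finished
kubectl -n pulsar wait --for=condition=complete job/pulsar-cluster-metadata --timeout=300s

# then spin up everything else
kubectl apply -f k8s/
```
----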
2018-12-04 14:13:07 UTC - Ivan Kelly: ok, i'm getting the same error
----
2018-12-04 14:13:46 UTC - Paul van der Linden: ok
----
2018-12-04 14:14:39 UTC - Paul van der Linden: that's a good start :wink:
----
2018-12-04 14:24:36 UTC - Ivan Kelly: how do you stop autoreboot in k8s?
----
2018-12-04 14:27:43 UTC - Paul van der Linden: Rebooting the pods you mean?
----
2018-12-04 14:28:56 UTC - Paul van der Linden: Currently the pod stops and kubernetes starts it again
----
2018-12-04 14:31:44 UTC - Ivan Kelly: ya, i think that's blowing away logs I want to see
----
2018-12-04 14:32:57 UTC - Paul van der Linden: Maybe if you run something which doesn't stop by itself and then use exec to run bookie yourself?
----
2018-12-04 14:34:00 UTC - Paul van der Linden: Or you can mount a hostdir for the logs and view them with minikube ssh
----
2018-12-04 14:38:14 UTC - Ivan Kelly: ok, it looks like the issue is that both bookies are getting the same advertised address
----
2018-12-04 14:39:33 UTC - Paul van der Linden: That's strange
----
2018-12-04 14:39:48 UTC - Paul van der Linden: The IP and hostname are different
----
2018-12-04 14:41:43 UTC - Ivan Kelly: well, the manifest is taking the address from status.hostIp
----
2018-12-04 14:41:55 UTC - Ivan Kelly: which looks like it's coming from my host's eth0
----
2018-12-04 14:42:39 UTC - Paul van der Linden: That should be different in kubernetes though
----
2018-12-04 14:45:27 UTC - Ivan Kelly: I think so, assuming each bookie container is on a different physical node
----
2018-12-04 14:45:46 UTC - Ivan Kelly: but really, it would make more sense to use the internal docker ip i think
----
2018-12-04 14:47:56 UTC - Ivan Kelly: one sec, need to read about k8s routing
----
2018-12-04 14:51:08 UTC - Paul van der Linden: ok
----
2018-12-04 14:51:09 UTC - Paul van der Linden: thanks
----
2018-12-04 14:51:56 UTC - Paul van der Linden: the networking from docker is overridden in kubernetes as far as I know
----
2018-12-04 14:53:06 UTC - Paul van der Linden: all containers have their own eth0 with their own ip and own hostname though
----
2018-12-04 14:53:29 UTC - Paul van der Linden: so unless bookie thinks it's 127.0.0.1 it should be fine
----
2018-12-04 14:54:29 UTC - Ivan Kelly: it's taking it from the host for me
----
2018-12-04 14:54:36 UTC - Ivan Kelly: i think it should use podIp rather than hostIp
----
2018-12-04 14:54:40 UTC - Ivan Kelly: let me try
----
2018-12-04 14:55:32 UTC - Paul van der Linden: I can see that indeed
----
2018-12-04 14:57:30 UTC - Paul van der Linden: oh it's in the manifests
----
2018-12-04 14:58:18 UTC - Paul van der Linden: that might be because that manifest used to be a daemonset
----
2018-12-04 14:58:35 UTC - Paul van der Linden: not sure why it's made like that
----
2018-12-04 14:59:05 UTC - Ivan Kelly: daemonset is where it guarantees only one per physical box, no?
----
2018-12-04 14:59:20 UTC - Paul van der Linden: yes
----
2018-12-04 14:59:26 UTC - Paul van der Linden: exactly one basically
----
2018-12-04 14:59:48 UTC - Paul van der Linden: in the gke there is nothing like that
----
2018-12-04 15:00:33 UTC - Paul van der Linden: now the cookie is wrong though
----
2018-12-04 15:02:01 UTC - Ivan Kelly: did you clear out the PVs?
----
2018-12-04 15:02:14 UTC - Paul van der Linden: not yet, just did that
----
2018-12-04 15:02:19 UTC - Paul van der Linden: let me see what happens now
----
2018-12-04 15:04:10 UTC - Paul van der Linden: ok
----
2018-12-04 15:04:17 UTC - Paul van der Linden: I think it's up and running now
----
2018-12-04 15:05:55 UTC - Paul van der Linden: I haven't had the chance to write some test scripts yet
----
2018-12-04 15:06:03 UTC - Paul van der Linden: but it seems to be up and running normally now
----
2018-12-04 15:06:40 UTC - Paul van der Linden: the changes I have made:
- initcontainer removed
- removed advertisedAddress environment variable
----
2018-12-04 15:06:53 UTC - Paul van der Linden: I don't seem to need the separate start of zookeeper or the sleep
----
2018-12-04 15:07:21 UTC - Paul van der Linden: It might crash once or twice at the start because zookeeper is not alive yet, but after that it seems to boot normally
----
2018-12-04 15:07:54 UTC - Ivan Kelly: sure. there's a script to deal with that on the image, it's just not being used
----
2018-12-04 15:08:20 UTC - Ivan Kelly: like it waits for zookeeper to be available, and allows you to wait for znodes to be created
----
2018-12-04 15:08:39 UTC - Ivan Kelly: we use it in testing, it just never got put in k8s specs
----
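The script Ivan refers to is not named in the thread. As a generic stand-in for the same idea (block until ZooKeeper is reachable before starting the bookie), something like the following could be prepended to the container command; it assumes `nc` is available in the image and that the ZooKeeper service resolves as `zookeeper:2181`:
```
# generic stand-in, not the script shipped on the image
until nc -z zookeeper 2181; do
  echo "waiting for zookeeper..."
  sleep 2
done
```
----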
2018-12-04 15:08:49 UTC - Paul van der Linden: ok
----
2018-12-04 15:10:07 UTC - Paul van der Linden: shall I open an issue, or do you do that?
----
2018-12-04 15:10:41 UTC - Ivan Kelly: can you do it, and tag me on it
----
2018-12-04 15:11:26 UTC - Paul van der Linden: sure
----
2018-12-04 15:19:35 UTC - Paul van der Linden: <https://github.com/apache/pulsar/issues/3121>
----
2018-12-04 15:19:43 UTC - Paul van der Linden: Do I need to add more info?
----
2018-12-04 15:23:50 UTC - Ivan Kelly: nope, that's perfect.
----
2018-12-04 15:34:24 UTC - Paul van der Linden: thanks for the help :slightly_smiling_face:
----
2018-12-04 16:30:55 UTC - Karthik Palanivelu: Team, I am reposting my question just to track it. This is to test synchronous replication. I would like to host 3 zookeepers on 2181 in 3 different regions and global zookeepers on 2184 on the same instances. I will create 2 clusters, A and B, to register to this quorum (where I am facing the issue `Node Exists Exception : \namespace`). I want to maintain only a minimum of zookeeper instances shared across clusters. Can you please advise whether this will work, or help me with how to achieve this design?
----
2018-12-04 17:05:22 UTC - Sijie Guo: @Paul van der Linden looks like the issue is because you were trying to change a daemonset k8s deployment to a statefulset deployment, but still using hostIp as the advertised address.

FYI: Generic is for a daemonset deployment because it assumes there are no PVs available in a generic environment; if you want a statefulset deployment, a cloud one like GKE is better to use or modify.

I will take a closer look at your issue though.
----
2018-12-04 17:32:59 UTC - Paul van der Linden: Yeah, I realize that now.  I don't think the hostIp is needed though
----
2018-12-04 17:46:56 UTC - Ivan Kelly: podIp should work for both cases. or hostnameAsBookieId
+1 : Paul van der Linden
----
2018-12-04 18:09:09 UTC - Paul van der Linden: without picking it up manually it also works as far as I know
----
2018-12-04 18:12:12 UTC - Sijie Guo: Well, that’s not true. The principle is to use a stable identifier for your bookkeeper deployment. In a daemonset, only hostIp is a stable identifier to use; while in a statefulset, the pod hostname is the one to use.
----
2018-12-04 18:12:39 UTC - Sijie Guo: Using podIp is not correct for bookkeeper deployment 
----
2018-12-04 18:12:51 UTC - Sijie Guo: PodIP changes when the pod restarts 
----
2018-12-04 18:13:27 UTC - Sijie Guo: The suggestion of using PodIP is misleading 
----
2018-12-04 18:14:37 UTC - Sijie Guo: The problem of using PodIP only occurs after you restart pods 
----
2018-12-04 18:14:45 UTC - Paul van der Linden: ah, my mistake
----
2018-12-04 18:15:18 UTC - Paul van der Linden: I'm not sure if there is anything against a statefulset with a preferred anti-affinity set
----
2018-12-04 18:16:05 UTC - Paul van der Linden: or it might be good to document (maybe with comments in the yaml) why certain things are set like that
----
2018-12-04 18:17:19 UTC - Sijie Guo: I think there is one section documenting that 
----
2018-12-04 18:17:22 UTC - Paul van der Linden: it would be helpful, as with generic I expected to get something similar to GKE, but instead it's pretty different (no persistent storage, no replication for bookies)
----
2018-12-04 18:17:26 UTC - Paul van der Linden: Did I miss that?
----
2018-12-04 18:18:48 UTC - Sijie Guo: BK has its own replication. So there are no technical differences between statefulset and daemonset.
----
2018-12-04 18:19:57 UTC - Sijie Guo: StatefulSet is good for cloud, or when your k8s cluster already has persistent volumes. But the downside is you have extra replication in the persistent volumes
----
2018-12-04 18:20:48 UTC - Sijie Guo: Daemonset is good for a k8 cluster that doesn’t have persistent volumes.
----
2018-12-04 18:21:14 UTC - Paul van der Linden: ok, that might make sense indeed
----
2018-12-04 18:21:24 UTC - Paul van der Linden: although I'm not sure there would be many clusters without PVs
----
2018-12-04 18:21:39 UTC - Paul van der Linden: usually there is some support available
----
2018-12-04 18:22:07 UTC - Paul van der Linden: I can't find it in the docs though
----
2018-12-04 18:22:23 UTC - Sijie Guo: In either deployment, the key is to choose an identifier as the bookie id, the identifier should not be changed upon pod restarts 
----
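To make the distinction concrete: the generic daemonset manifest advertises the node's address through the downward API, while a StatefulSet drops that and relies on the sticky pod hostname (a sketch; the `advertisedAddress` env var and `status.hostIP` are what the thread describes, everything else is illustrative):
```
# DaemonSet (one bookie per node): the node IP is the stable identifier
env:
  - name: advertisedAddress
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP

# StatefulSet: the pod hostname is sticky, so remove advertisedAddress
# and set useHostNameAsBookieID: "true" in the bookie ConfigMap instead
```
----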
2018-12-04 18:24:04 UTC - Sijie Guo: The reason the generic example is a daemonset is that people typically use it for trying things out and then customize it to their own k8s deployment. In that sense, using a daemonset for the generic examples makes sense for most k8s clusters
----
2018-12-04 18:24:22 UTC - Sijie Guo: Let me find the documentation when I am close to a laptop
----
2018-12-04 18:24:31 UTC - Paul van der Linden: thanks!
----
2018-12-04 18:40:39 UTC - Sijie Guo: @Paul van der Linden - I think the documentation was in bookkeeper. <http://bookkeeper.apache.org/docs/latest/deployment/kubernetes/#deploy-bookies> I will update them to pulsar as well.
----
2018-12-04 18:41:11 UTC - Paul van der Linden: Ah nice, thanks!
----
2018-12-04 18:42:29 UTC - Sijie Guo: can you share the steps on how you create the clusters? it will help me understand better how the exception was thrown
----
2018-12-04 19:19:19 UTC - Ivan Kelly: ah, I thought podIp was stable. in any case, this should be documented in the yaml itself. and we should consider just enabling useHostnameAsBookieId, as that seems stable too and would work with minikube, which a lot of people will try this out on
----
2018-12-04 19:19:50 UTC - Sijie Guo: hostname is only stable when you're using daemonset or statefulset.
----
2018-12-04 19:19:59 UTC - Sijie Guo: sorry
----
2018-12-04 19:20:06 UTC - Sijie Guo: only in statefulset
----
2018-12-04 19:20:45 UTC - Ivan Kelly: really? how does it change with daemonset?
----
2018-12-04 19:21:04 UTC - Sijie Guo: hostname in a pod is the pod name
----
2018-12-04 19:21:21 UTC - Sijie Guo: the only deployment mode in k8s that has sticky pod name is statefulset
----
2018-12-04 19:21:41 UTC - Sijie Guo: so in daemonset, we have to use hostIp
----
2018-12-04 19:22:03 UTC - Ivan Kelly: ugh
----
2018-12-04 19:23:24 UTC - Ivan Kelly: this needs a big documentation block in the yaml
----
2018-12-04 19:23:37 UTC - Sijie Guo: yes.
----
2018-12-04 19:27:34 UTC - Ivan Kelly: also, the bookies shouldn't be metaformatting
----
2018-12-04 19:52:59 UTC - Karthik Palanivelu: I created one instance of ZK along with global ZK in an EC2 instance. It all came up fine after running cluster metadata initialization for Cluster A. Then I deployed bookies, brokers and proxies in a Kubernetes cluster in Region A. It joined the ZK created earlier and was able to produce/consume messages. Next I tried to run the cluster metadata initialization for Cluster B, and it is failing. It seems like I am missing some config to make the ZK shared across the two clusters. Can you please help?
----
2018-12-04 21:20:38 UTC - Christophe Bornet: Do we need to start bookies to assign them a rack via the REST API ? Or can it be done before they connect for the first time to ZK ? Or is there another way to set the initial rack to the bookie ?
----
2018-12-04 21:40:48 UTC - Grant Wu: Question: when the Python client’s Consumer times out (with `timeout_millis` set to a non-None value) - what exception is thrown?
----
2018-12-04 21:50:03 UTC - Matteo Merli: That should be a `PulsarException`, though I just saw that unit tests are not asserting for the exception type
----
2018-12-04 22:28:58 UTC - Grant Wu: It appears to actually just be a regular exception….
----
2018-12-04 22:29:29 UTC - Grant Wu: ```
In [8]: try:
   ...:     c.receive(timeout_millis=500)
   ...: except Exception as e:
   ...:     print(type(e))
   ...:
<class 'Exception'>
```
----
2018-12-04 22:29:36 UTC - Grant Wu: From iPython
----
2018-12-04 22:50:02 UTC - Sijie Guo: you don’t need bookies to be started before assigning them a rack. however it is a bit weird: you need brokers to assign the racks, because the rest api goes through the brokers :slightly_smiling_face:
----
2018-12-04 22:50:12 UTC - Sijie Guo: /cc @Matteo Merli
----
2018-12-04 22:51:04 UTC - Matteo Merli: Yes, it’s kind of strange.. You can always write to the z-node directly though :slightly_smiling_face:
----
2018-12-04 22:51:25 UTC - Matteo Merli: In any case, the rack info can be dynamically changed at any point in time
----
2018-12-04 22:59:34 UTC - Sijie Guo: I think we can provide a tool that updates zookeeper directly ?
----
2018-12-04 23:05:51 UTC - Christophe Bornet: OK thanks. My question was about adding new BK nodes on a live cluster so I guess that's doable.
----
2018-12-04 23:06:04 UTC - Sijie Guo: yes
----
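For reference, rack assignment goes through the brokers' admin REST API, for example via `pulsar-admin` (a hedged example: the subcommand and flag names may vary between Pulsar versions, and the bookie address, group and rack values are made up; as noted above, the same info can also be written to the znode directly):
```
bin/pulsar-admin bookies set-bookie-rack \
  --bookie bookie-1:3181 \
  --group default \
  --rack rack-1
```
----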
2018-12-05 00:24:56 UTC - Sijie Guo: how do you configure the cluster b? are they pointing to the zookeeper you setup in region A?
----
2018-12-05 00:35:57 UTC - Wang: @Wang has joined the channel
----
2018-12-05 02:08:56 UTC - Watch Jiang: @Watch Jiang has joined the channel
----
2018-12-05 02:10:00 UTC - Grant Wu: I made <https://github.com/apache/pulsar/issues/3127>
----
2018-12-05 02:33:38 UTC - Karthik Palanivelu: Yes, that is correct. I thought it would create a new znode, but the exception says that's not the case. The idea is to keep the zookeeper instances lean, to be cost effective, while achieving synchronous replication. Or let me know what the best approach is.
----
2018-12-05 06:21:46 UTC - Karthik Ramasamy: @Julien Plissonneau Duquène - we have a tutorial accepted for Strata London. We will be in London April 29 - May 2nd. We can arrange something around that time?
----
2018-12-05 06:56:38 UTC - Tobias Gustafsson: @Tobias Gustafsson has joined the channel
----
2018-12-05 07:25:41 UTC - Sijie Guo: in that case, you don’t need to initialize cluster metadata again. you can just configure the brokers and bookies in region B to use the same zookeeper
----
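A sketch of the kind of configuration implied for region B (hostnames are placeholders and the ports follow the description above; `zookeeperServers` and `configurationStoreServers` are the standard broker.conf settings for the local and configuration-store quorums):
```
# conf/broker.conf for the region-B brokers: point at the same shared quorums
# that region A already uses, instead of initializing new cluster metadata
zookeeperServers=zk-1:2181,zk-2:2181,zk-3:2181
configurationStoreServers=global-zk-1:2184,global-zk-2:2184,global-zk-3:2184
```
----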