Posted to users@pulsar.apache.org by Apache Pulsar Slack <ap...@gmail.com> on 2019/09/07 09:11:03 UTC

Slack digest for #general - 2019-09-07

2019-09-06 11:19:45 UTC - Mahesh: Hi,
When I try to query the state of a function with the following command, I get this error:
pulsar-admin functions querystate --tenant public --namespace default --name wordcount --key the
State storage client is not done initializing. Please try again in a little while.

Reason: HTTP 503 Service Unavailable
What is wrong here?
Also, the following is set in bookkeeper.conf
extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent
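For reference, a minimal sketch of the wiring this error usually points to, assuming the default stream-storage port 4181 and that the function worker can reach the bookies directly (host and port are assumptions, not from the original message):
```
# bookkeeper.conf - enable the table (stream storage) service on each bookie
extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent

# functions_worker.yml - point the function worker at the table service
# (127.0.0.1:4181 is an assumption; use your bookie host and stream-storage port)
stateStorageServiceUrl: bk://127.0.0.1:4181

# restart the bookies and the function worker, then retry:
bin/pulsar-admin functions querystate \
  --tenant public --namespace default --name wordcount --key the
```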
----
2019-09-06 11:54:41 UTC - Ravi Shah: @Sijie Guo Is there any limit for this parameter?
----
2019-09-06 12:47:13 UTC - Jacob O'Farrell: Thank you for sharing!
----
2019-09-06 12:47:31 UTC - Jacob O'Farrell: Is there a timeline / release date for Pulsar Manager? Looks like a great addition
----
2019-09-06 14:21:22 UTC - Emerson Argueta: Is it possible for a producer to send a message to a particular subscription? For example, producer A sends a message that can be consumed only by consumer X with subscription name "sub-x".
----
2019-09-06 14:33:05 UTC - David Kjerrumgaard: @Emerson Argueta In order to achieve this scenario you would need to have consumer X use an exclusive subscription on the topic that the producer is sending to. In addition, you would have to ensure that consumer X's subscription is the ONLY subscription on the topic; otherwise, the same message would be consumed by multiple consumers.
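A minimal Java sketch of that setup (topic name, subscription name, and service URL are placeholders):
```
import org.apache.pulsar.client.api.*;

public class ExclusiveSubscriptionExample {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")        // placeholder broker URL
                .build();

        // Consumer X holds the ONLY subscription ("sub-x") on the topic, Exclusive type
        Consumer<byte[]> consumerX = client.newConsumer()
                .topic("persistent://public/default/my-topic")
                .subscriptionName("sub-x")
                .subscriptionType(SubscriptionType.Exclusive)
                .subscribe();

        // Producer A publishes to the same topic; since no other subscription exists,
        // only consumer X will receive the messages
        Producer<byte[]> producerA = client.newProducer()
                .topic("persistent://public/default/my-topic")
                .create();
        producerA.send("only-for-consumer-x".getBytes());

        Message<byte[]> msg = consumerX.receive();
        consumerX.acknowledge(msg);

        client.close();
    }
}
```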
----
2019-09-06 14:41:38 UTC - Emerson Argueta: @David Kjerrumgaard My scenario is the following:
1. Topic "my-topic" is persistent and has a schema that is queried through Pulsar SQL.
2. The messages in "my-topic" are like rows in a traditional database except I cannot delete or update those rows as the Presto connector does not allow it.
3. I would like to be able to delete a row as in a traditional transactional database.

A naive attempt to provide delete semantics on top of "my-topic":
Apply a key to each producer message, then acknowledge messages through a consumer based on that key.
----
2019-09-06 14:47:20 UTC - Emerson Argueta: Anyone have any thoughts about solutions to this problem?
----
2019-09-06 14:49:30 UTC - ed kerrigan: @ed kerrigan has joined the channel
----
2019-09-06 14:49:53 UTC - David Kjerrumgaard: @Emerson Argueta That use case reminds me of how records are updated in a distributed key-value store such as HBase.  In those systems they use two additional meta-data fields: tombstone markers to indicate whether a record has been deleted, and timestamps to preserve the ordering of updates. When the user queries the data, the deleted records are ignored and only the most recent version of each record is returned (all of this is handled internally by HBase).
----
2019-09-06 14:50:31 UTC - David Kjerrumgaard: Perhaps you could replicate this type of behavior by adding two meta-data fields to your data as well?
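A rough sketch of what such a record could look like so Pulsar SQL can still see the extra fields (the class and field names here are illustrative, not part of the original suggestion):
```
import org.apache.pulsar.client.api.Schema;

public class RowRecord {
    public String key;       // logical primary key of the "row"
    public String payload;   // the actual row data
    public long updatedAt;   // timestamp used to order updates to the same key
    public boolean deleted;  // tombstone flag: true means "treat this row as deleted"
}

// Producing with a JSON schema keeps the fields queryable from Pulsar SQL:
// Producer<RowRecord> producer = client.newProducer(Schema.JSON(RowRecord.class))
//         .topic("my-topic")
//         .create();
```
Queries would then filter out rows whose tombstone flag is set and keep only the latest update per key.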
----
2019-09-06 15:06:49 UTC - Emerson Argueta: @David Kjerrumgaard Thanks for the suggestion! Yes I think I could try this.
----
2019-09-06 15:09:50 UTC - Emerson Argueta: I think I also need to figure out how to offload the expired rows to bucket storage and keep the non-deleted rows.
----
2019-09-06 15:12:18 UTC - David Kjerrumgaard: @Emerson Argueta Then you will want to emulate HBase's minor and major compaction processes.  The minor compaction process runs frequently and purges the in-memory cache, while the major compaction process purges the on-disk data.  Good luck!
+1 : Emerson Argueta, jgil
----
2019-09-06 15:19:22 UTC - pr1568: @pr1568 has joined the channel
----
2019-09-06 16:23:16 UTC - Addison Higham: is pulsar manager open source? that does look really appealing
----
2019-09-06 17:01:36 UTC - Gilberto Muñoz Hernández: Hey guys, I am using the Spark Pulsar receiver from a YARN cluster, but I can't make it work on multiple worker nodes.
----
2019-09-06 17:01:56 UTC - Gilberto Muñoz Hernández: I am using a topic with 3 partitions
----
2019-09-06 17:02:12 UTC - Gilberto Muñoz Hernández: And 3 executors with 4 cores each
----
2019-09-06 17:02:41 UTC - Gilberto Muñoz Hernández: But all I see from the pulsar logs is a single worker node connecting to all partitions
----
2019-09-06 17:03:14 UTC - Gilberto Muñoz Hernández: And I am also using SHARED mode
----
2019-09-06 17:04:42 UTC - Gilberto Muñoz Hernández: Any help? The documentation says nothing about this. I know data is being processed on multiple Spark nodes, but the receiver is only running on a single node (guessing it's getting shuffled internally after being received)
tired_face : Gilberto Muñoz Hernández
----
2019-09-06 17:06:16 UTC - Matteo Merli: When consuming from a partitioned topic, with a shared subscription, the consumers will consume messages from all the partitions
----
2019-09-06 17:06:46 UTC - Matteo Merli: > But all I see from the pulsar logs is a single worker node connecting to all partitions
----
2019-09-06 17:07:04 UTC - Matteo Merli: You mean the other workers are not attempting to connect?
----
2019-09-06 17:08:25 UTC - Gilberto Muñoz Hernández: I entered each broker and searched each log; all I found for my subscription name were entries from the same worker node
----
2019-09-06 17:08:35 UTC - Gilberto Muñoz Hernández: the same IP, let's say
----
2019-09-06 17:08:57 UTC - Gilberto Muñoz Hernández: connecting to all the partitions in some log or another
----
2019-09-06 17:09:41 UTC - Gilberto Muñoz Hernández: I was expecting to see at least 3 IPs connecting to my partitions
----
2019-09-06 17:10:12 UTC - Gilberto Muñoz Hernández: because I'm using 3 executors with enough cores to connect to 3 partitions
----
2019-09-06 17:11:02 UTC - Gilberto Muñoz Hernández: 11:29:08.246 [pulsar-io-23-2] INFO  org.apache.pulsar.broker.service.ServerCnx - [/192.168.0.23:36404] Subscribing on topic persistent://public/default/main-partition-1 / speedTracker
----
2019-09-06 17:11:16 UTC - Gilberto Muñoz Hernández: speed tracker is the subscription name
----
2019-09-06 17:11:28 UTC - Gilberto Muñoz Hernández: and that ip is one of the yarn nodes
----
2019-09-06 17:12:00 UTC - Gilberto Muñoz Hernández: all I could find across all the logs is that IP connecting to all partitions
----
2019-09-06 17:12:20 UTC - Gilberto Muñoz Hernández: any ideas?
----
2019-09-06 17:27:32 UTC - Poule: @Addison Higham yup
----
2019-09-06 17:27:38 UTC - Poule: <https://github.com/streamnative/pulsar-manager>
----
2019-09-06 17:27:55 UTC - Addison Higham: :party-parrot:
----
2019-09-06 17:28:05 UTC - Poule: lol
----
2019-09-06 17:28:26 UTC - Addison Higham: I saw there was one message in the mailing list about apache con, but wonder if it might be easier to co-ordinate here. I will be there...
----
2019-09-06 17:32:09 UTC - David Kjerrumgaard: I will be there as well, giving a presentation on Monday, and will be at the BoF and other sessions Weds and Thursday.
----
2019-09-06 17:43:38 UTC - Kendall Magesh-Davis: It did not. Is that essential for BK to self-replicate? I was under the impression that a BK could have its ledger recreated by other BKs.
----
2019-09-06 17:46:23 UTC - Matteo Merli: You can also check the topic stats
----
2019-09-06 17:47:01 UTC - Matteo Merli: `pulsar-admin topics stats $TOPIC`
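The `subscriptions` section of that output shows how many consumers are actually attached to each subscription; roughly (abridged, with illustrative values):
```
"subscriptions" : {
  "speedTracker" : {
    "type" : "Exclusive",
    "consumers" : [ {
      "consumerName" : "...",
      "address" : "/192.168.0.23:36404"
    } ]
  }
}
```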
----
2019-09-06 17:48:14 UTC - Matteo Merli: Or check the logs in the spark receiver to see why it’s not attempting to connect
----
2019-09-06 17:53:02 UTC - Luke Lu: It might be more useful to have a managedLedgerMaxSizePerLedgerMB (instead of max entries), as people have topics with different average message rates and sizes and want to maintain consistent ledger sizes
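For reference, the rollover knobs that exist today in broker.conf are entry-count and time based (values shown are believed to be the defaults); a size-based limit would complement these:
```
# broker.conf
managedLedgerMaxEntriesPerLedger=50000
managedLedgerMinLedgerRolloverTimeMinutes=10
managedLedgerMaxLedgerRolloverTimeMinutes=240
```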
----
2019-09-06 18:57:04 UTC - Gilberto Muñoz Hernández: OK, from the stats I can clearly see now that there is indeed just one consumer associated with my subscription
----
2019-09-06 18:57:52 UTC - Ali Ahmed: there is something wrong with the new libpulsar brew formula
```
==> Upgrading libpulsar 
==> Downloading https://homebrew.bintray.com/bottles/libpulsar-2.4.1.mojave.bottle.tar.gz
==> Downloading from https://akamai.bintray.com/29/290ddc98bdf6ff8d4d527a4bb7051e5b38c577768870bde489967fe15c16e7ef?__gd
######################################################################## 100.0%
Error: SHA256 mismatch
Expected: 229f907ec33a5f53e21cadbbb554c6c14d1df7a4a9a99ec8158554221016a84d
  Actual: 290ddc98bdf6ff8d4d527a4bb7051e5b38c577768870bde489967fe15c16e7ef
 Archive: /Users/a.ahmed/Library/Caches/Homebrew/downloads/3b58466a6822adc38b470b125c88387973bee4a83f86699c64d7ee13a8307249--libpulsar-2.4.1.mojave.bottle.tar.gz
To retry an incomplete download, remove the file above.
Warning: Bottle installation failed: building from source.
```
----
2019-09-06 19:00:36 UTC - Gilberto Muñoz Hernández: and from the logs of spark
----
2019-09-06 19:00:52 UTC - Gilberto Muñoz Hernández: you can see no other node even tries to connect
----
2019-09-06 19:00:55 UTC - Gilberto Muñoz Hernández: just one
----
2019-09-06 19:01:18 UTC - Gilberto Muñoz Hernández: and even funnier, it says it's an Exclusive subscription
----
2019-09-06 19:01:29 UTC - Gilberto Muñoz Hernández: to a null topic name !!!!!
----
2019-09-06 19:01:37 UTC - Gilberto Muñoz Hernández: but it's working !!!!
----
2019-09-06 19:02:04 UTC - Gilberto Muñoz Hernández: 19/09/06 13:34:13 INFO impl.ConsumerStatsRecorderImpl: Starting Pulsar consumer perf with config: {
  "topicNames" : [ ],
  "topicsPattern" : null,
  "subscriptionName" : "speedTracker",
  "subscriptionType" : "Exclusive",
----
2019-09-06 19:02:10 UTC - Gilberto Muñoz Hernández: wtf!!!
exploding_head : Gilberto Muñoz Hernández
----
2019-09-06 19:08:27 UTC - Gilberto Muñoz Hernández: ```
JavaStreamingContext jsc = new JavaStreamingContext(sparkConf, Durations.seconds(windowSeconds));

ConsumerConfigurationData<byte[]> pulsarConf = new ConsumerConfigurationData<byte[]>();

Set<String> set = new HashSet<>();
set.add(pulsarTopic);
pulsarConf.setTopicNames(set);
pulsarConf.setSubscriptionName("speedTracker");
pulsarConf.setSubscriptionType(SubscriptionType.Shared);
pulsarConf.setConsumerName("speedTracker-" + InetAddress.getLocalHost().getHostName());

SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(pulsarUrl, pulsarConf,
        new AuthenticationDisabled());

JavaReceiverInputDStream<byte[]> lineDStream = jsc.receiverStream(pulsarReceiver);
```
2019-09-06 19:09:13 UTC - Gilberto Muñoz Hernández: this is my java code, there you can see the Shared mode
----
2019-09-06 19:11:54 UTC - Ali Ahmed: it still works though it just builds from source
----
2019-09-06 19:13:38 UTC - Gilberto Muñoz Hernández: by the way, that's Spark 2.4.3, YARN 2.9.2, and Pulsar 2.3.2
----
2019-09-06 19:21:27 UTC - Gilberto Muñoz Hernández: please @Matteo Merli just tell me if this can be done, if I can really consume records from different executors, or if the only way is to have one executor consume all the data and then send it to the other executors for processing
----
2019-09-06 19:22:23 UTC - Matteo Merli: it’s definitely possible, the executors are internally wrapping Pulsar consumers
----
2019-09-06 19:23:41 UTC - Gilberto Muñoz Hernández: is there any example out there, any code (don't care the language)
----
2019-09-06 19:23:55 UTC - Matteo Merli: let me check one sec on the code
----
2019-09-06 19:24:09 UTC - Matteo Merli: on why it doesn’t pick up the Shared vs Exclusive
----
2019-09-06 19:26:09 UTC - Matteo Merli: I think it’s a bug in the `SparkStreamingPulsarReceiver`, that doesn’t pick up that config param
----
2019-09-06 19:26:13 UTC - Matteo Merli: …
----
2019-09-06 19:26:43 UTC - Matteo Merli: so it’s always “exclusive” (which is the default)..
----
2019-09-06 19:26:51 UTC - Matteo Merli: and only 1 consumer is allowed to connect
----
2019-09-06 19:26:58 UTC - Gilberto Muñoz Hernández: Could it be the Java implementation only?
----
2019-09-06 19:27:08 UTC - Gilberto Muñoz Hernández: are you asking or saying?
----
2019-09-06 19:27:51 UTC - Gilberto Muñoz Hernández: did you see my code?
----
2019-09-06 19:31:36 UTC - Matteo Merli: no no, the bug is in <https://github.com/apache/pulsar/blob/master/pulsar-spark/src/main/java/org/apache/pulsar/spark/SparkStreamingPulsarReceiver.java#L91>
----
2019-09-06 19:31:49 UTC - Matteo Merli: I’ll send a fix quickly
----
2019-09-06 19:33:59 UTC - Gilberto Muñoz Hernández: wow thanks a lot man
----
2019-09-06 19:36:46 UTC - Gilberto Muñoz Hernández: so, should I file an issue?
----
2019-09-06 19:37:26 UTC - Gilberto Muñoz Hernández: I'm guessing I will have to wait for some release soon to be able to receive in parallel
----
2019-09-06 19:40:03 UTC - Gilberto Muñoz Hernández: also, not an expert here, but...
----
2019-09-06 19:40:40 UTC - Gilberto Muñoz Hernández: the "setConsumerName" prop, how exactly is that supposed to work?
----
2019-09-06 19:41:44 UTC - Gilberto Muñoz Hernández: you create the configuration once in the driver, so in the end all the executors will receive the same configurations and therefore will use the same consumer name
----
2019-09-06 19:41:58 UTC - Gilberto Muñoz Hernández: isn't that a problem?
----
2019-09-06 19:42:51 UTC - Gilberto Muñoz Hernández: that all the executors in shared mode are using the same consumer name?
----
2019-09-06 20:10:20 UTC - Sijie Guo: David, have you updated that email thread about the time of the BoF?
----
2019-09-06 20:12:07 UTC - David Kjerrumgaard: I have not
----
2019-09-06 20:18:27 UTC - David Kjerrumgaard: I am not allowed to email the apache.org dev list with a non-apache.org email address
----
2019-09-06 20:19:15 UTC - David Kjerrumgaard: Apache Pulsar @ ApacheCon Las Vegas - Sept 9-14.   <https://www.apachecon.com/acna19/s/#/schedule?search=Pulsar>
----
2019-09-06 21:10:14 UTC - Matteo Merli: You need to update with the appropriate sha256 checksum
----
2019-09-06 21:20:42 UTC - Ali Ahmed: <https://github.com/Homebrew/homebrew-core/pull/43956>
----
2019-09-06 21:26:41 UTC - Sijie Guo: okay I saw you just sent out one
----
2019-09-06 21:26:51 UTC - Sijie Guo: I assumed that you can send the email :slightly_smiling_face: right?
----
2019-09-06 21:38:04 UTC - Rajiv Abraham: Hi,
I'm trying to follow the tutorial replacing the 2.3.0 version with 2.4.1. <https://debezium.io/blog/2019/05/23/tutorial-using-debezium-connectors-with-apache-pulsar/>.
I'm using the docker image 'apachepulsar/pulsar-all:2.4.1'
Since I'm using a Docker image for Pulsar and I'm running MySQL locally, I'm temporarily routing it via ngrok, as I don't know how to connect them otherwise :( (having failed to do so).
When I do the below on the docker image
```
bin/pulsar-admin source localrun  --sourceConfigFile debezium-mysql-source-config.yaml
```
I get
```
...1:36:09.110 [pulsar-client-io-1-1] INFO  org.apache.pulsar.client.impl.ProducerImpl - [kafka-connect-topic] [standalone-1-23] Closed Producer
21:36:09.124 [main] INFO  org.apache.pulsar.functions.LocalRunner - RuntimeSpawner quit because of
java.lang.ClassNotFoundException: io.debezium.connector.mysql.MySqlConnectorTask
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[?:1.8.0_212]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:1.8.0_212]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_212]
	at java.lang.Class.forName0(Native Method) ~[?:1.8.0_212]
	at java.lang.Class.forName(Class.java:264) ~[?:1.8.0_212]
	at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:87) ~[?:?]
	at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:716) ~[org.apache.pulsar-pulsar-functions-instance-2.4.1.jar:2.4.1]
	at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:205) ~[org.apache.pulsar-pulsar-functions-instance-2.4.1.jar:2.4.1]
	at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:234) ~[org.apache.pulsar-pulsar-functions-instance-2.4.1.jar:2.4.1]
	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_212]
```
My yaml file is
```
tenant: "test"
namespace: "test-namespace"
name: "debezium-kafka-source"
topicName: "kafka-connect-topic"
archive: "connectors/pulsar-io-kafka-connect-adaptor-2.4.1.nar"

parallelism: 1

configs:
  ## sourceTask
  task.class: "io.debezium.connector.mysql.MySqlConnectorTask"

  ## config for mysql, docker image: debezium/example-mysql:0.8
  database.hostname: "e6751906.ngrok.io"
  database.port: "80"
  database.user: "debezium"
  database.password: "dbz"
  database.server.id: "184054"
  database.server.name: "dbserver1"
  database.whitelist: "inventory"

  database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
  database.history.pulsar.topic: "history-topic"
  database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
  ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG
  key.converter: "org.apache.kafka.connect.json.JsonConverter"
  value.converter: "org.apache.kafka.connect.json.JsonConverter"
  ## PULSAR_SERVICE_URL_CONFIG
  pulsar.service.url: "pulsar://127.0.0.1:6650"
  ## OFFSET_STORAGE_TOPIC_CONFIG
  offset.storage.topic: "offset-topic"
```
----
2019-09-06 21:38:26 UTC - Rajiv Abraham: Let me know if I should raise this on the github page
----
2019-09-06 21:57:44 UTC - David Kjerrumgaard: This was the email I received back when I sent the email out. But based on your response, it sounds like you received it anyways..... :smiley:
----
2019-09-06 22:17:35 UTC - Sijie Guo: @Rajiv Abraham I think there are refactors around debezium connectors between 2.3.x and 2.4.x. Please use this tutorial for 2.4.x connectors:  <http://pulsar.apache.org/docs/en/next/io-cdc-debezium/>
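In 2.4.x the MySQL connector ships as its own archive, so the ClassNotFoundException above likely means the generic kafka-connect-adaptor nar is being used instead. A hedged sketch of a 2.4.1-style source config along the lines of that tutorial (tenant, topic names, and database host are placeholders; check the linked doc for the exact keys):
```
tenant: "public"
namespace: "default"
name: "debezium-mysql-source"
topicName: "debezium-mysql-topic"
archive: "connectors/pulsar-io-debezium-mysql-2.4.1.nar"
parallelism: 1

configs:
  database.hostname: "localhost"
  database.port: "3306"
  database.user: "debezium"
  database.password: "dbz"
  database.server.id: "184054"
  database.server.name: "dbserver1"
  database.whitelist: "inventory"
  database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
  database.history.pulsar.topic: "history-topic"
  database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
  key.converter: "org.apache.kafka.connect.json.JsonConverter"
  value.converter: "org.apache.kafka.connect.json.JsonConverter"
  pulsar.service.url: "pulsar://127.0.0.1:6650"
  offset.storage.topic: "offset-topic"
```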
----
2019-09-06 22:17:54 UTC - Sijie Guo: lol yeah I saw that email :slightly_smiling_face:
----
2019-09-06 22:18:44 UTC - Matteo Merli: the PRs are validated on the homebrew CI.. has the binary for 2.4.1 changed??
----
2019-09-06 22:19:24 UTC - David Kjerrumgaard: I think you were the only one to receive it and the <ma...@aprache.org> group did NOT.  So if you want to forward it to that email group you will have to do so using your apache.org email address.
----
2019-09-06 22:19:47 UTC - Ali Ahmed: maybe
----
2019-09-06 22:19:49 UTC - Ali Ahmed: don’t know
----
2019-09-06 22:20:08 UTC - Sijie Guo: okay let me do it
----
2019-09-06 22:20:32 UTC - Matteo Merli: that would be bad
----
2019-09-06 22:20:43 UTC - Matteo Merli: …very bad
----
2019-09-06 22:20:50 UTC - Matteo Merli: also, we should remove <https://github.com/henryiii/homebrew-core/blob/7752a52f18f8b727439ecbe414889feeeb55361d/Formula/libpulsar.rb#L26>
----
2019-09-06 22:23:22 UTC - Sijie Guo: just replied to your email. hopefully it will bring your content to the mailing list.
+1 : David Kjerrumgaard
----
2019-09-06 22:34:02 UTC - Vladimir Shchur: Hi! Another .net client released! <https://www.nuget.org/packages/Pulsar.Client/>
+1 : David Kjerrumgaard, Sijie Guo, Ali Ahmed, Matteo Merli, dba
----
2019-09-06 22:57:46 UTC - Matteo Merli: @Ali Ahmed you have updated the wrong hash in the PR
----
2019-09-06 22:58:29 UTC - Matteo Merli: is it really the bottle hash that is out of sync?
----
2019-09-06 22:58:44 UTC - Ali Ahmed: yes the mojave one
----
2019-09-06 22:58:51 UTC - Ali Ahmed: I am using mojave
----
2019-09-06 22:59:11 UTC - Ali Ahmed: src hash is fine
----
2019-09-06 23:59:54 UTC - Chris Bartholomew: Yes, BK can self replicate. You are just triggering a safety check. Based on the configuration, the bookie expects to see specific files in the listed directories, but it has found them empty. If this is an error, for example, the disk didn't get mounted, then you can correct it and restart. If you really don't have the data anymore (like in this case), you just need to tell the bookie to start up anyway. Removing the cookie in ZK should do that for you. This is how I clear the cookie when running in k8s:
----
2019-09-06 23:59:58 UTC - Chris Bartholomew: ```kubectl exec -it <pod> /bin/bash
bin/pulsar zookeeper-shell
ls /ledgers/cookies
delete /ledgers/cookies/<cookie-for-bad-bookie>```
----
2019-09-07 00:46:30 UTC - Jacob O'Farrell: Thanks @Poule!
----
2019-09-07 03:09:38 UTC - Nicolas Ha: Installed it, looks useful! Thanks :smile:
----
2019-09-07 04:12:18 UTC - Rajiv Abraham: @Sijie Guo Thanks, will try. BTW, there is a typo in the conf file: 'tenant: "pubilc"'
----
2019-09-07 06:15:05 UTC - Vladimir Shchur: By the way, it would be great if there was some documentation on how to add support for a custom language for Pulsar functions, similar to what we have for a custom Pulsar client
----
2019-09-07 06:16:43 UTC - Ali Ahmed: have a look at the PIP and PRs for Go functions
----
2019-09-07 06:19:37 UTC - Sijie Guo: @Vladimir Shchur I would actually suggest holding off on adding new language support for functions. We have found that the function runtime became a bit complicated when introducing Go functions. There is an outstanding issue for refactoring the function runtime to provide a better abstraction and interface, making it easier to plug in different language runtimes, although we haven't picked up the refactor yet.
----
2019-09-07 06:20:43 UTC - Sijie Guo: Once we have done the refactor, we will add a development guide for adding a new language runtime.
----
2019-09-07 06:21:39 UTC - Sijie Guo: @Ali Ahmed I don't think Go functions are a good example to follow, because Golang is too special compared to Java and Python.
----
2019-09-07 06:23:00 UTC - Sijie Guo: <https://github.com/apache/pulsar/issues/4176> this was the issue for the runtime refactor
----
2019-09-07 06:37:49 UTC - Vladimir Shchur: I see, thank you
----
2019-09-07 06:55:05 UTC - Addison Higham: woo! just got geo-replication working across 3 clusters across 3 regions with a global zookeeper that spans those regions. A few wrinkles (which I will try and jot down here sometime) but all told, nothing too crazy
100 : Sijie Guo, Ali Ahmed, Jacob O'Farrell
----
2019-09-07 07:35:59 UTC - Jacob O'Farrell: Awesome! Is this running on bare metal, Kubernetes, or something else altogether?
----