Posted to users@pulsar.apache.org by Apache Pulsar Slack <ap...@gmail.com> on 2019/04/04 09:11:04 UTC

Slack digest for #general - 2019-04-04

2019-04-03 10:28:21 UTC - Michael Bongartz: @jia zhai nevermind, for some reason it is working on another pulsar proxy VM with exactly the same config/secrets file/pulsar release (same perms, same files, file integrity checked). It was really weird.
ok_hand : jia zhai
----
2019-04-03 11:23:36 UTC - bhagesharora: Hi there,
I am implementing the pulsar-kafka-adapter through a Java program.
Reference URL - <http://pulsar.apache.org/docs/latest/adaptors/KafkaWrapper/>
I have added the pulsar-client-kafka dependency in my pom.xml, and created ProducerExample.java & ConsumerExample.java following
Reference URL - <https://github.com/apache/pulsar/tree/master/pulsar-client-kafka-compat/pulsar-client-kafka-tests>
The producer class is running fine and able to produce a message, but the consumer is not working properly.
See the screenshots below for ProducerExample.java/ConsumerExample.java and the output from both.
What would be the reason for this?
----
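For comparison, a minimal consumer through the Kafka wrapper might look like the sketch below (the service URL, topic, and subscription names are assumptions, not taken from the screenshots). One thing worth checking in cases like this: with the wrapper, `group.id` maps to a Pulsar subscription, and a brand-new subscription starts at the end of the topic, so messages produced before the consumer first subscribes are not delivered to it.

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // With pulsar-client-kafka on the classpath, the Pulsar service URL
        // goes where the Kafka bootstrap servers normally would.
        props.put("bootstrap.servers", "pulsar://localhost:6650");
        // group.id maps to the Pulsar subscription name
        props.put("group.id", "my-subscription");
        props.put("enable.auto.commit", "true");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("persistent://public/default/my-topic"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```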
2019-04-03 11:24:03 UTC - bhagesharora: 
----
2019-04-03 11:24:18 UTC - bhagesharora: 
----
2019-04-03 11:25:15 UTC - bhagesharora: 
----
2019-04-03 11:25:27 UTC - bhagesharora: 
----
2019-04-03 13:28:07 UTC - Marc Le Labourier: Does anyone know how the quota of producers per topic is determined?
----
2019-04-03 14:01:23 UTC - Chris DiGiovanni: Trying to set up a multi-cluster from the docs, and when I initialize the metadata I see the following error. Right now I have the local-cluster zookeeper process and the configuration-store process on the same machines.
```
Exception in thread "main" org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists for /namespace
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:122)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
        at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:792)
        at org.apache.pulsar.PulsarClusterMetadataSetup.main(PulsarClusterMetadataSetup.java:178)
```
----
2019-04-03 14:03:05 UTC - Chris DiGiovanni: The docs say that you can share the same machine, you just need to change the ports for the zookeepers. Is this not the case?
----
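For reference, sharing machines between the local ZooKeeper and the configuration store generally means giving them distinct client ports and passing each explicitly when initializing the metadata. A sketch of the command (cluster name, hostnames, and ports are assumptions):

```shell
# Local ZK on 2181, configuration store on 2184 (ports are assumptions)
bin/pulsar initialize-cluster-metadata \
  --cluster us-west \
  --zookeeper zk1.example.com:2181 \
  --configuration-store zk1.example.com:2184 \
  --web-service-url http://pulsar.example.com:8080 \
  --broker-service-url pulsar://pulsar.example.com:6650
```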
2019-04-03 14:25:13 UTC - Sébastien de Melo: Hi @Sijie Guo,
We have finally succeeded by adding
```
echo "extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent" >> conf/bookkeeper.conf
```
to the bookkeeper deployment,
```
PF_stateStorageServiceUrl: bk://{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}:4181
```
to the broker config map, and
```
sed -i 's/self\.__client__ = kv\.Client(namespace=table_ns)/self.__client__ = kv.Client(storage_client_settings=client_settings, namespace=table_ns)/g' /pulsar/instances/python-instance/state_context.py
```
to the broker deployment (it didn't work without that, since the broker was looking for the table service on itself)
----
2019-04-03 15:17:15 UTC - Chris DiGiovanni: Disregard the above, I found my configuration issue.
----
2019-04-03 16:04:26 UTC - Ryan Samo: Hey guys,
Is there a suggested way of upgrading an existing cluster to a new version in place? Can you perform the upgrade in a rolling fashion without downtime? What order do you perform the upgrade in? Etc? I’m looking to automate this potentially. :)
----
2019-04-03 16:17:02 UTC - Ezequiel Lovelle: I don't know if there is a standard way to do it. In our case we tend to: 1. upgrade the bookies one by one, since bk should be backward compatible; 2. the global (configuration-store) zookeeper; 3. the rest of the zookeepers; 4. finally, the brokers. Of course this may vary depending on the changes between versions, and a staging cluster reproducing the setup saves a lot of pain.

I don't remember ever facing downtime in this process, but it was always treated very carefully.
+1 : Yuvaraj Loganathan, Shivji Kumar Jha
----
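The rolling order described above can be sketched with `pulsar-daemon` (a sketch under assumptions: a pulsar-daemon-managed deployment; adapt service names, hosts, and the release-swap step to your setup, and wait for each node to rejoin and the cluster to be healthy before moving on):

```shell
# 1. Bookies, one at a time (BookKeeper should be backward compatible)
bin/pulsar-daemon stop bookie
# ...unpack the new release here, keeping conf/ ...
bin/pulsar-daemon start bookie

# 2. Configuration-store (global) ZooKeeper nodes, one at a time
bin/pulsar-daemon stop configuration-store
bin/pulsar-daemon start configuration-store

# 3. Local ZooKeeper nodes, one at a time
bin/pulsar-daemon stop zookeeper
bin/pulsar-daemon start zookeeper

# 4. Brokers, one at a time
bin/pulsar-daemon stop broker
bin/pulsar-daemon start broker
```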
2019-04-03 16:25:49 UTC - Ezequiel Lovelle: Anyway, would be great to hear other people thoughts and experience :slightly_smiling_face:
----
2019-04-03 16:27:06 UTC - Ryan Samo: Thanks @Ezequiel Lovelle for your input. I too would welcome others' experiences
----
2019-04-03 17:15:26 UTC - vinay Parekar: hi guys, is there any provision to set log levels on log topics?
----
2019-04-03 17:16:56 UTC - David Kjerrumgaard: What do you mean by log levels, do you mean (DEBUG, INFO, WARN, etc)?
----
2019-04-03 17:17:02 UTC - vinay Parekar: yes
----
2019-04-03 17:21:34 UTC - Sanjeev Kulkarni: right now there is no explicit control of that on a per function basis.
----
2019-04-03 17:32:53 UTC - vinay Parekar: ohk thanks sanjeev
----
2019-04-03 17:41:55 UTC - Ryan Samo: Is there a way to query for the closest MessageId to a given timestamp? I see there is a seek ability, but being able to get the startingMessageId for a reader given a timestamp would be very nice
----
2019-04-03 17:42:52 UTC - Ryan Samo: Instead of earliest or latest, a time to start basically 
----
2019-04-03 17:42:54 UTC - FG: @FG has joined the channel
----
2019-04-03 17:51:42 UTC - David Kjerrumgaard: @Ryan Samo We would have to add a method to the ReaderBuilder API
----
2019-04-03 17:53:05 UTC - Sijie Guo: @Ryan Samo I think there was a seek-by-time feature added in 2.3.0 client
----
2019-04-03 17:54:19 UTC - Sijie Guo: oh, you are looking at opening a reader by time
----
2019-04-03 17:55:03 UTC - David Kjerrumgaard: @Sijie Guo Which class are you referring to?
----
2019-04-03 17:55:27 UTC - Sijie Guo: I was referring to Consumer#seek(long timestamp)
----
2019-04-03 17:55:59 UTC - Sijie Guo: I think what @Ryan Samo needs is `startMessageId(long timestamp)` in ReaderBuilder
----
2019-04-03 17:56:15 UTC - Ryan Samo: Yeah, I have a partitioned topic and 1 reader per partition. I have attempted to do a seek as follows:
```
// Down-cast to get to consumer.seek and perform the cursor adjustment
((ReaderImpl) reader).getConsumer().seekAsync(RelativeTimeUtil.parseRelativeTimeInSeconds("-10m"));
```
But it always seems to come up with "Failed to reset subscription: Message id was not present"
----
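One possible culprit in the snippet above: `RelativeTimeUtil.parseRelativeTimeInSeconds("-10m")` returns a relative offset in seconds, while the seek-by-time overload added in the 2.3.0 client expects an absolute UNIX timestamp in milliseconds, so the broker may be asked to reset to a time before any message exists. A minimal sketch of computing the absolute timestamp (the commented seek call assumes the same down-cast as above and is hypothetical):

```java
import java.util.concurrent.TimeUnit;

public class SeekTimestamp {
    public static void main(String[] args) {
        // seekAsync(long) takes an absolute UNIX timestamp in milliseconds,
        // so "10 minutes ago" has to be computed against the current clock.
        long tenMinutesAgo = System.currentTimeMillis() - TimeUnit.MINUTES.toMillis(10);
        System.out.println(tenMinutesAgo);
        // Then, hypothetically, on a 2.3.0+ client:
        // ((ReaderImpl<byte[]>) reader).getConsumer().seekAsync(tenMinutesAgo);
    }
}
```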
2019-04-03 17:57:07 UTC - Ryan Samo: Correct @Sijie Guo , a reader like you stated or at least that behavior 
----
2019-04-03 17:59:58 UTC - Ryan Samo: Only because I can’t find a good way to position my readers by ledgerid, etc.
----
2019-04-03 18:00:16 UTC - Ryan Samo: On a partitioned topic
----
2019-04-03 18:00:46 UTC - David Kjerrumgaard: This sounds like a good feature request....
----
2019-04-03 18:02:56 UTC - Ryan Samo: :+1:
----
2019-04-03 18:11:02 UTC - David Kjerrumgaard: @Ryan Samo If you cannot wait for the FR, you can TRY using Presto to query the topic for the messageID by timestamp. Then provide that message id to the current Reader API.
----
2019-04-03 18:11:20 UTC - David Kjerrumgaard: definitely a hack.....
----
2019-04-03 18:13:07 UTC - Ryan Samo: Yeah, that's an interesting approach for sure. I'm trying to find a workaround; I was hoping the downcast to a ReaderImpl would allow the seek to work, but no luck so far
----
2019-04-03 18:16:03 UTC - David Kjerrumgaard: The messageID is essentially a PK into the topic, and we would have to create a secondary index based on the timestamps.
----
2019-04-03 18:18:28 UTC - Sijie Guo: your approach sounds reasonable. do you have messages in `-10m`?
----
2019-04-03 18:20:05 UTC - Ryan Samo: Makes sense. So the docs say that it is up to the app to determine the MessageId. Is there a vision for how you see that working? When you're dealing with 1:N partitions, it gets tough to follow, lending itself to indexing. Earliest and latest work great; it's the in-between that's tough to pinpoint
----
2019-04-03 18:20:36 UTC - Ryan Samo: Yes, I’m playing with various times to see if I can get it to work 
----
2019-04-03 18:21:51 UTC - Ryan Samo: Not sure on the reader if it would work or not because if you look at the subscription for a partition, it shows the reader name. But if you run the admin command from the CLI to reset-cursor, it says no subscription found
----
2019-04-03 18:24:19 UTC - Sijie Guo: I see. The cursor of a reader is an ephemeral cursor, so it doesn't actually show up in the CLI. But you might be right that reset is not supported for ephemeral cursors right now. Let me file a few issues. I will loop in @xiaolong.ran to look into these since he contributed the seek-by-time feature.
----
2019-04-03 18:24:55 UTC - Ryan Samo: Thanks @Sijie Guo !
----
2019-04-03 18:35:41 UTC - Jerry Peng: @bhagesharora there seems to be some issue with connecting to the pulsar cluster. Can you try running the following command where you have started the presto standalone cluster:
```
$ pulsar-admin persistent list public/default
```
----
2019-04-03 18:36:29 UTC - Jerry Peng: Where are you running the presto cluster?  On the same machine?
----
2019-04-03 18:36:30 UTC - Sijie Guo: <https://github.com/apache/pulsar/issues/3975>
<https://github.com/apache/pulsar/issues/3976>
----
2019-04-03 18:36:57 UTC - Sijie Guo: @Ryan Samo I created two issues: one for looking into the issue you encountered, and one for adding the feature to the reader
----
2019-04-03 19:51:11 UTC - Ryan Samo: Thanks for all the help!
----
2019-04-03 19:52:54 UTC - Sree Vaddi: Live Stream Link added.
<https://www.meetup.com/SF-Bay-ACM/events/259921891/>
+1 : Yuvaraj Loganathan
----
2019-04-04 01:35:57 UTC - bossbaby: Can someone show me how to fix this?
```
01:32:55.283 [BookKeeperClientWorker-OrderedExecutor-1-0] ERROR org.apache.bookkeeper.client.PendingReadOp - Read of ledger entry failed: L55 E0-E0, Sent to [localhost:3181], Heard from [] : bitset = {}, Error = 'No such ledger exists'. First unread entry is (-1, rc = null)
01:32:55.283 [bookkeeper-ml-workers-OrderedExecutor-1-0] WARN  org.apache.bookkeeper.mledger.impl.OpReadEntry - [chain/test/persistent/test2][pull-data-backup-test] read failed from ledger at position:55:0 : No such ledger exists
01:32:55.283 [broker-topic-workers-OrderedScheduler-4-0] ERROR org.apache.pulsar.broker.service.persistent.PersistentDispatcherSingleActiveConsumer - [<persistent://chain/test/test2> / pull-data-backup-test-Consumer{subscription=PersistentSubscription{topic=<persistent://chain/test/test2>, name=pull-data-backup-test}, consumerId=0, consumerName=6fcbac, address=/116.111.13.127:59695}] Error reading entries at 55:0 : No such ledger exists - Retrying to read in 28.595 seconds
```
----
2019-04-04 02:47:51 UTC - Samuel Sun: @bossbaby it seems you hit this: <https://github.com/apache/bookkeeper/blob/master/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/PendingReadOp.java#L618-L636> ? Does that mean something is wrong with bk?
----
2019-04-04 04:24:15 UTC - bhagesharora: @Jerry Peng yes, on the same machine
----
2019-04-04 05:00:31 UTC - bhagesharora: @Jerry Peng I am getting a read timeout. See below:
----
2019-04-04 05:00:35 UTC - bhagesharora: bhagesharora93@pulsar-setup-2-3-0-n1-standalone:~/apache-pulsar-2.3.0/bin$ ls
bookkeeper  connectors  function-localrunner  proto  pulsar  pulsar-admin  pulsar-admin-common.sh  pulsar-client  pulsar-daemon  pulsar-managed-ledger-admin  pulsar-perf
bhagesharora93@pulsar-setup-2-3-0-n1-standalone:~/apache-pulsar-2.3.0/bin$ pulsar-admin persistent list public/default
pulsar-admin: command not found
bhagesharora93@pulsar-setup-2-3-0-n1-standalone:~/apache-pulsar-2.3.0/bin$ ./pulsar-admin persistent list public/default
null

Reason: javax.ws.rs.ProcessingException: java.net.SocketTimeoutException: Read timed out
----
2019-04-04 06:33:59 UTC - bossbaby: @Sahaya Andrews Albert my bookkeeper is working, so I don't know why this happens
----
2019-04-04 06:43:28 UTC - Guangzhong Yao: I think you should check whether the topic actually wrote data to ledger 55; maybe that ledger is a discarded one @bossbaby
----
2019-04-04 06:46:51 UTC - bossbaby: I can't check it; I use geo-replication to push data
----
2019-04-04 06:58:18 UTC - Guangzhong Yao: you can see the internal stats of the topic by using `pulsar-admin topics stats-internal`
----
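For reference, `stats-internal` takes the full topic name; for the topic named in the earlier error it might look like this (the topic name is taken from the log above, the CLI path is an assumption):

```shell
bin/pulsar-admin topics stats-internal persistent://chain/test/test2
```

The output includes a `ledgers` array listing each ledger's id, entry count, and size.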
2019-04-04 07:29:50 UTC - bossbaby: These are the ledgers. It seems ledger 55 has been lost:
```
  "ledgers" : [ {
    "ledgerId" : 68,
    "entries" : 33969,
    "size" : 1189018372,
    "offloaded" : false
  }, {
    "ledgerId" : 81,
    "entries" : 17613,
    "size" : 4241053,
    "offloaded" : false
  }, {
    "ledgerId" : 87,
    "entries" : 24185,
    "size" : 5851524,
    "offloaded" : false
  }, {
    "ledgerId" : 94,
    "entries" : 24192,
    "size" : 5855542,
    "offloaded" : false
  }, {
    "ledgerId" : 101,
    "entries" : 24191,
    "size" : 5863767,
    "offloaded" : false
  }, {
    "ledgerId" : 108,
    "entries" : 25584,
    "size" : 6218094,
    "offloaded" : false
  }, {
    "ledgerId" : 114,
    "entries" : 28308,
    "size" : 6876269,
    "offloaded" : false
  }, {
    "ledgerId" : 120,
    "entries" : 29027,
    "size" : 7052333,
    "offloaded" : false
  }, {
    "ledgerId" : 127,
    "entries" : 29028,
    "size" : 7052425,
    "offloaded" : false
  }, {
    "ledgerId" : 134,
    "entries" : 29028,
    "size" : 7061519,
    "offloaded" : false
  }, {
    "ledgerId" : 149,
    "entries" : 0,
    "size" : 0,
    "offloaded" : false
  } ],
```
----
2019-04-04 07:32:52 UTC - Guangzhong Yao: Did you actually lose data? Maybe no data was ever written to it, and ledger 55 is a discarded ledger.
----
2019-04-04 07:34:28 UTC - Guangzhong Yao: 55 is older than the oldest ledger in the list; maybe it has expired
----
2019-04-04 07:37:12 UTC - Jerry Peng: ya there seems to be some networking issue
----
2019-04-04 07:37:38 UTC - Jerry Peng: does your machine resolve localhost -> 127.0.0.1?
----
2019-04-04 07:40:24 UTC - bossbaby: That's a very serious thing, how can I restore it now?
----
2019-04-04 07:44:34 UTC - Guangzhong Yao: that's related to your retention setting; otherwise it won't be deleted.
----
2019-04-04 07:46:37 UTC - Guangzhong Yao: But if the ledger has been deleted, you may be able to restore it from bk if it has not been garbage collected, though I'm not sure such a tool exists.
----
2019-04-04 07:49:56 UTC - bossbaby: I made sure to set the topic's retention to -1
----
2019-04-04 07:50:23 UTC - bossbaby: Can bookie's recovery function recover it?
----
2019-04-04 07:53:36 UTC - Guangzhong Yao: then the data won't be lost, in my opinion. I don't know why ledger 55 is not a valid ledger in the topic's ledger list.
----
2019-04-04 07:56:13 UTC - Guangzhong Yao: can you check the broker log to make sure ledger 55 indeed has data? Then you can decide on the next step.
+1 : bossbaby
----