Posted to users@pulsar.apache.org by Apache Pulsar Slack <ap...@gmail.com> on 2018/01/27 23:19:05 UTC

Slack digest for #general - 2018-01-27

2018-01-26 23:21:06 UTC - Jaebin Yoon: ah you should go through one by one with that, then.
----
2018-01-26 23:21:24 UTC - Matteo Merli: yes, that’s not super-convenient
----
2018-01-26 23:22:06 UTC - Matteo Merli: that’s why I’m hinting that the easiest way is to pre-provision based on expected topics/loads
----
2018-01-26 23:22:11 UTC - Matteo Merli: :slightly_smiling_face:
----
2018-01-26 23:22:14 UTC - Jaebin Yoon: ok. the simple script can do that, but yeah, it's not the most convenient way.
----
2018-01-26 23:27:34 UTC - Matteo Merli: yes, though it should be easy to extend the script to, let’s say, split all bundles, or something like “split until there are at least 200 bundles”
----
2018-01-26 23:27:52 UTC - Matteo Merli: script --&gt; `pulsar-admin` CLI tool
----
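A minimal sketch of what such a “split until there are at least 200 bundles” loop could look like, driven by the `pulsar-admin` CLI and `jq`. This assumes the `namespaces bundles` and `namespaces split-bundle` subcommands and a `boundaries` field in their JSON output; exact flags and fields may differ between versions, and a bundle generally needs to be loaded by a broker before it can be split. `$NS` is the namespace, as in the examples below.
```TARGET=200

while true; do
  # number of bundles = number of boundaries - 1
  count=$(pulsar-admin namespaces bundles "$NS" | jq '.boundaries | length - 1')
  [ "$count" -ge "$TARGET" ] && break
  # split the first bundle; a smarter script would pick the most loaded one
  bundle=$(pulsar-admin namespaces bundles "$NS" | jq -r '.boundaries[0] + "_" + .boundaries[1]')
  pulsar-admin namespaces split-bundle "$NS" --bundle "$bundle"
done```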
2018-01-26 23:33:09 UTC - Jaebin Yoon: oh interesting. if I modify the policy after creating the namespace, it overwrites the bundle setup.
```pulsar-admin namespaces create $NS --bundles 300
pulsar-admin namespaces set-persistence -e 2 -w 2 -a 1 -r 0 $NS```
The set-persistence overwrites the bundles with the default of 4 bundles.
----
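For reference, one way to watch the behavior described above is to check the bundle count reported by the policies before and after set-persistence. This is a sketch that assumes the policies JSON exposes a `bundles.numBundles` field (field names may differ across versions):
```pulsar-admin namespaces create $NS --bundles 300
pulsar-admin namespaces policies $NS | jq '.bundles.numBundles'   # expect 300
pulsar-admin namespaces set-persistence -e 2 -w 2 -a 1 -r 0 $NS
pulsar-admin namespaces policies $NS | jq '.bundles.numBundles'   # reported as 4 in the issue above```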
2018-01-26 23:36:59 UTC - Matteo Merli: ok, that might be a bit confusing, but the info about bundles is tracked per-cluster, rather than globally.

The policies for a namespace are shared across all clusters in different regions (e.g. stored in global ZK)

The information about the bundle splits is tracked locally, so that each broker can take a local decision to split a given bundle, without coordinating with the rest of the clusters
----
2018-01-26 23:38:33 UTC - Matteo Merli: the bundle information is initialized to a certain value and then copied locally by each cluster
----
2018-01-26 23:38:54 UTC - Matteo Merli: in local ZK, that would be under `get /admin/local-policies/$NAMESPACE`
----
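A short sketch of inspecting both copies with the standard ZooKeeper CLI, following the local-policies path mentioned above; the `/admin/policies/...` path for the globally shared policies and the ZK hostnames are assumptions and may differ by version:
```# local (per-cluster) bundle data, in the local ZK quorum
zkCli.sh -server local-zk:2181 get /admin/local-policies/$NAMESPACE
# globally shared namespace policies, in the global/configuration ZK
zkCli.sh -server global-zk:2181 get /admin/policies/$NAMESPACE```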
2018-01-26 23:41:48 UTC - Jaebin Yoon: After creating the namespace with 300 bundles, if I do "pulsar-admin namespaces policies $NS", I see those bundles, but after set-persistence it shows 4 bundles.  What you're saying is that the bundles I created are still there?
----
2018-01-26 23:44:48 UTC - Matteo Merli: After the namespace is created with N bundles, all the splits are tracked locally. Also, when the namespace starts being used in a given cluster, it will copy the “initial” config for bundles locally and keep using that from then on.
----
2018-01-26 23:45:18 UTC - Matteo Merli: Let me try using the set-persistence _before_ starting to use the namespace
----
2018-01-26 23:50:22 UTC - Matteo Merli: @Jaebin Yoon Just tried the same thing and it’s working for me:
----
2018-01-26 23:51:29 UTC - Matteo Merli: @Matteo Merli uploaded a file: <https://apache-pulsar.slack.com/files/U680ZCXA5/F8ZH29L02/-.js|Untitled>
----
2018-01-26 23:52:56 UTC - Jaebin Yoon: Oh, that's a different result from mine. Hmm, let me delete the namespace and try again.
----
2018-01-26 23:59:57 UTC - Jaebin Yoon: Well... it worked this time. Not sure what happened. I'll try a couple of times to see if I can reproduce that.
----
2018-01-27 00:03:40 UTC - Jaebin Yoon: Maybe there is a delay in propagating the data.. in my environment I potentially hit a different broker for every pulsar-admin command (using DNS to select any broker). I saw the default 4 bundles when I queried with "pulsar-admin namespaces policies" right after creating with 300 bundles, but if I query again, it shows 300. 
Anyway, it seems it's working. thanks!
----
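If the suspicion were that a different broker answers each call, the admin endpoint can be pinned to a single broker for comparison. This sketch assumes the `--admin-url` global option of `pulsar-admin` and a hypothetical broker hostname:
```pulsar-admin --admin-url http://broker-1.example.com:8080 namespaces policies $NS```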
2018-01-27 00:06:53 UTC - Matteo Merli: The delay in propagating the notification, or hitting a different broker, shouldn’t affect the result
----
2018-01-27 00:06:53 UTC - Matteo Merli: <https://github.com/apache/incubator-pulsar/blob/6bb98344ca48f81aaf403372400cb38013616e9d/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/Namespaces.java#L1297>
----
2018-01-27 00:07:48 UTC - Matteo Merli: We’re validating the version of the z-node in zookeeper, so if there are concurrent updates, or delays in dispatching the watches, the 2nd update will fail
----
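As an illustration of that conditional update, ZooKeeper's own CLI lets you pass an expected version to `set`; with placeholder paths/data (the exact CLI syntax varies between ZooKeeper versions), a stale version is rejected in the same way:
```stat /admin/policies/$NAMESPACE            # note the dataVersion, e.g. 3
set /admin/policies/$NAMESPACE '{...}' 3   # succeeds and bumps dataVersion to 4
set /admin/policies/$NAMESPACE '{...}' 3   # fails: version 3 no longer matches```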
2018-01-27 01:56:57 UTC - Jaebin Yoon: do you have any snapshot release that I can test with?
----
2018-01-27 01:58:31 UTC - Matteo Merli: the snapshots are not currently being published in the Apache Maven repo :confused:
----
2018-01-27 02:00:03 UTC - Matteo Merli: you can build from current master
----
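A rough sketch of building from master at that point in time; the location of the resulting binary tarball is an assumption and may have changed since:
```git clone https://github.com/apache/incubator-pulsar.git
cd incubator-pulsar
mvn clean install -DskipTests
# the binary distribution is then expected under all/target/ (apache-pulsar-*-bin.tar.gz)```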
2018-01-27 02:12:30 UTC - Jaebin Yoon: ok. I will try one from master. I need that producer fix to test further. Whenever I have more than 100 partitions, the producers crash because of that.
----
2018-01-27 08:59:45 UTC - Jaebin Yoon: I'm getting many of these exceptions. What do these mean?
```2018-01-27 08:55:01,095 - ERROR - [BookKeeperClientWorker-18-1:LedgerHandle@845] - Closing ledger 260608 due to error -101
2018-01-27 08:55:01,095 - ERROR - [BookKeeperClientWorker-18-1:PendingAddOp@270] - Write of ledger entry to quorum failed: L260608 E137704
2018-01-27 08:55:01,109 - WARN  - [BookKeeperClientWorker-18-1:LedgerHandle$2$1CloseCb$1@398] - Conditional update ledger metadata for ledger 260608 failed.
2018-01-27 08:55:01,109 - WARN  - [BookKeeperClientWorker-18-1:LedgerHandle$NoopCloseCallback@1189] - Close failed: Bad ledger metadata version
2018-01-27 08:55:01,112 - WARN  - [BookKeeperClientWorker-19-1:PendingAddOp@228] - Fencing exception on write: L260609 E137717 on 100.85.135.220:3181
2018-01-27 08:55:01,112 - ERROR - [BookKeeperClientWorker-19-1:LedgerHandle@845] - Closing ledger 260609 due to error -101
2018-01-27 08:55:01,112 - WARN  - [BookKeeperClientWorker-22-1:LedgerHandle$3@630] - Attempt to add to closed ledger: 260609
2018-01-27 08:55:01,122 - WARN  - [BookKeeperClientWorker-19-1:LedgerHandle$2$1CloseCb$1@398] - Conditional update ledger metadata for ledger 260609 failed.
2018-01-27 08:55:01,122 - WARN  - [BookKeeperClientWorker-19-1:LedgerHandle$NoopCloseCallback@1189] - Close failed: Bad ledger metadata version```
----
2018-01-27 10:43:22 UTC - jia zhai: Seems the BookKeeper add-entry failed because it found that the ledger metadata version doesn't match
----
2018-01-27 16:47:58 UTC - Jaebin Yoon: Not sure why this happens. I brought up new bookies and terminated the old bookies, and while I did that I lost all data (no auto-recovery was running while I terminated the old bookies). I removed the old topics so that nobody can use the old ledgers. Might this error be related to these new bookies? I used the same AMI, so nothing has changed in terms of the pulsar/bookie jars.
----
2018-01-27 16:53:03 UTC - Jaebin Yoon: zookeeper was up the whole time while I updated the bookies (red/black style), so zookeeper still contains all the old ledgers, but the bookies for those ledgers are no longer available. Since the topics for those old ledgers are gone, nobody should use them, right?
----
2018-01-27 16:53:42 UTC - Jaebin Yoon: How can I correct this kind of issue?
----
2018-01-27 17:37:44 UTC - Matteo Merli: @Jaebin Yoon Were the topics deleted successfully? Are these logs getting printed in the broker?
----
2018-01-27 18:40:28 UTC - Jaebin Yoon: Yes, they were deleted without error, and those errors were from the brokers.
----