Posted to users@pulsar.apache.org by Apache Pulsar Slack <ap...@gmail.com> on 2019/09/18 09:11:03 UTC

Slack digest for #general - 2019-09-18

2019-09-17 12:16:39 UTC - Ravi Shah: How to run standalone pulsar with TLS?
----
2019-09-17 12:43:42 UTC - Kirill Merkushev: <https://github.com/bsideup/liiklus/blob/master/plugins/pulsar-records-storage/src/test/java/com/github/bsideup/liiklus/pulsar/container/PulsarTlsContainer.java#L19> maybe this could help
----
2019-09-17 14:06:01 UTC - Cory Davenport: So managedLedgerMaxLedgerRolloverTimeMinutes is set to 4 hours (240 minutes). This allows the ledger to be rolled over more frequently.

What about managedLedgerOffloadDeletionLagMs? This is also set to 4 hours. Will this feature work even without a storage system being set up for offloading?

I checked against Pulsar today after about 12+ hours of it running. I was still able to reach all the messages sent the day before. Meaning the messages were not deleted.

I have moved managedLedgerMaxLedgerRolloverTimeMinutes and managedLedgerOffloadDeletionLagMs to 5 minutes each just to see if there is a difference. So far no change.
----
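For reference, the 5-minute experiment Cory describes would correspond roughly to the following broker.conf / standalone.conf values (a sketch, not a recommendation; 5 minutes is 300000 ms):

    # roll over the current ledger at most every 5 minutes
    managedLedgerMaxLedgerRolloverTimeMinutes=5
    # delete the BookKeeper copy 5 minutes (300000 ms) after a ledger is offloaded
    # (this only takes effect when tiered storage is configured)
    managedLedgerOffloadDeletionLagMs=300000
----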
2019-09-17 15:15:06 UTC - David Kjerrumgaard: @Ravi Shah You will need to follow the instructions on the Pulsar web site, <http://pulsar.apache.org/docs/en/security-tls-transport/>  but make changes to the `standalone.conf` file instead of the `broker.conf` file.
----
2019-09-17 15:15:56 UTC - David Kjerrumgaard: You will also need to make changes in your `conf/client.conf` file for the CLI tools to work.  HTH
----
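For the CLI tools, the `conf/client.conf` changes would look roughly like this (a minimal sketch based on the TLS transport docs; hostnames, ports, and paths are placeholders):

    webServiceUrl=https://localhost:8443/
    brokerServiceUrl=pulsar+ssl://localhost:6651/
    tlsAllowInsecureConnection=false
    tlsEnableHostnameVerification=false
    tlsTrustCertsFilePath=/path/to/ca.cert.pem
----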
2019-09-17 15:46:33 UTC - Joseph Stanton8558: @Joseph Stanton8558 has joined the channel
----
2019-09-17 17:17:54 UTC - Kirill Merkushev: Hello, I found a possible client leak with `PersistentAcknowledgmentsGroupingTracker` and an `exclusive` subscription. As far as I can see it is closed only when you close the consumer, but with an exclusive subscription you can get a Busy exception on `subscribeAsync`, which marks the consumer as failed and doesn't actually allow it to be closed afterwards, since an exception is thrown. But `PersistentAcknowledgmentsGroupingTracker` keeps a reference to the ConsumerImpl, as scheduling of the ack flush happens right in the constructor. Disabling this with `.acknowledgmentGroupTime(0, TimeUnit.SECONDS)` seems to fix the issue, but am I missing something?
----
2019-09-17 17:19:24 UTC - Matteo Merli: So, when you’re subscribing and getting an error, the tracker task is started and never cancelled?
----
2019-09-17 17:20:04 UTC - Kirill Merkushev: yep
----
2019-09-17 17:21:12 UTC - Kirill Merkushev: with a simple retry on subscribe every 100ms and 3 consumers with the same name, I got 80k ConsumerImpl instances in 10 min
----
2019-09-17 17:22:15 UTC - Matteo Merli: Got it... yes, the tracker should either be created later (only on success) or closed when the subscribe op fails
----
2019-09-17 17:22:17 UTC - Kirill Merkushev: can share heap dump if needed
----
2019-09-17 17:22:44 UTC - Kirill Merkushev: or create an issue if that would be helpful
----
2019-09-17 17:33:03 UTC - Matteo Merli: I created a PR <https://github.com/apache/pulsar/pull/5204>
----
2019-09-17 19:22:01 UTC - Kirill Merkushev: wow, that's fast :slightly_smiling_face: thanks!
----
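The workaround Kirill mentions can be sketched roughly like this (a minimal sketch; the service URL, topic, and subscription name are placeholders, and disabling acknowledgment grouping means each ack is sent individually):

    import java.util.concurrent.TimeUnit;

    import org.apache.pulsar.client.api.Consumer;
    import org.apache.pulsar.client.api.PulsarClient;
    import org.apache.pulsar.client.api.SubscriptionType;

    public class ExclusiveSubscribeWorkaround {
        public static void main(String[] args) throws Exception {
            PulsarClient client = PulsarClient.builder()
                    .serviceUrl("pulsar://localhost:6650")
                    .build();

            // With a grouping time of 0, no background ack-flush task (which would
            // hold a reference to the ConsumerImpl) is scheduled for this consumer.
            Consumer<byte[]> consumer = client.newConsumer()
                    .topic("persistent://public/default/my-topic")
                    .subscriptionName("my-sub")
                    .subscriptionType(SubscriptionType.Exclusive)
                    .acknowledgmentGroupTime(0, TimeUnit.SECONDS)
                    .subscribe();

            // ... consume and acknowledge messages ...

            consumer.close();
            client.close();
        }
    }
----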
2019-09-17 19:22:24 UTC - Ravi Shah: @David Kjerrumgaard Can you please tell me what the equivalent keys are inside standalone.conf for the following keys:
tlsEnabled=true
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem

Or should I put the same keys inside standalone.conf?
----
2019-09-17 19:23:41 UTC - David Kjerrumgaard: The property names (keys) are the same
----
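In other words, the same properties go into standalone.conf, along with the TLS listener ports (a sketch; paths and ports are placeholders):

    brokerServicePortTls=6651
    webServicePortTls=8443
    tlsEnabled=true
    tlsCertificateFilePath=/path/to/broker.cert.pem
    tlsKeyFilePath=/path/to/broker.key-pk8.pem
    tlsTrustCertsFilePath=/path/to/ca.cert.pem
----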
2019-09-17 19:28:09 UTC - Ravi Shah: I am getting "Got exception TooLongFrameException: Adjusted frame length exceeds 5253120: 369295620 - discarded" while connecting a consumer after applying TLS. The consumer connects successfully when I remove the TLS config. Any idea?
----
2019-09-17 19:28:15 UTC - Ravi Shah: @David Kjerrumgaard
----
2019-09-17 20:04:00 UTC - David Kjerrumgaard: That message is generated by Netty when you send a message payload that exceeds the maximum allowed frame length.  How big are the messages you are trying to send?
----
2019-09-17 20:18:15 UTC - Ravi Shah: I am just sending "test"
----
2019-09-17 20:19:58 UTC - Ravi Shah: and when I try with the CLI producer client it shows the following error on the producer side:
"Error during handshake"
----
2019-09-17 20:20:30 UTC - Ravi Shah: <https://github.com/apache/pulsar/issues/3981>
----
2019-09-17 20:20:38 UTC - Ravi Shah: I am facing this same issue
----
2019-09-17 20:20:40 UTC - Ravi Shah: @David Kjerrumgaard
----
2019-09-17 20:24:16 UTC - David Kjerrumgaard: The issue you reference was caused by the client connecting to a non-secured port, e.g. 6650 instead of 6651.
----
2019-09-17 20:24:26 UTC - David Kjerrumgaard: Which port are you connecting to?
----
2019-09-17 20:27:53 UTC - Ravi Shah: I am connecting to 6651
----
2019-09-17 20:31:32 UTC - Ravi Shah: I can share configs with you, if you can check them.
----
2019-09-17 20:32:27 UTC - Ravi Shah: @David Kjerrumgaard
----
2019-09-17 20:32:46 UTC - David Kjerrumgaard: Sure
----
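For reference, a minimal Java client sketch for connecting over TLS (host, topic, and certificate path are placeholders; the point is the pulsar+ssl:// scheme and the TLS port, since a scheme/port mismatch typically shows up as handshake or frame-length errors like the ones above):

    import org.apache.pulsar.client.api.Producer;
    import org.apache.pulsar.client.api.PulsarClient;

    public class TlsProducerExample {
        public static void main(String[] args) throws Exception {
            PulsarClient client = PulsarClient.builder()
                    // TLS scheme and TLS port (6651), not pulsar://...:6650
                    .serviceUrl("pulsar+ssl://localhost:6651")
                    // CA certificate used to verify the broker's certificate
                    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
                    .allowTlsInsecureConnection(false)
                    .build();

            Producer<byte[]> producer = client.newProducer()
                    .topic("persistent://public/default/my-topic")
                    .create();

            producer.send("test".getBytes());

            producer.close();
            client.close();
        }
    }
----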
2019-09-17 23:12:34 UTC - Sijie Guo: > Will this feature work even without a storage system being set up for offloading?

This setting is only applied to tiered storage.

> I was still able to reach all the messages sent the day before. Meaning the messages were not deleted.

Do you have any subscriptions alive on that topic? The retention policy is only applied to messages that have been consumed (acknowledged) by all subscriptions.

In order to debug this, can you please use "pulsar-admin topics stats-internal" to get the internal stats of the topic, to see why the retention doesn't work?
----
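The command Sijie mentions would look roughly like this (the topic name is a placeholder; on recent releases the subcommand group is `topics`):

    bin/pulsar-admin topics stats-internal persistent://public/default/my-topic

In the output, the cursors section lists each subscription and its markDeletePosition, which shows how far that subscription has acknowledged and therefore what is still being retained for it.
----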
2019-09-18 00:12:41 UTC - Devin G. Bost: What headers must we provide to POST to the REST Admin endpoint for function updates?
----