Posted to commits@bookkeeper.apache.org by "hangc0276 (via GitHub)" <gi...@apache.org> on 2023/04/29 07:33:56 UTC

[GitHub] [bookkeeper] hangc0276 opened a new pull request, #3940: Improve compaction performance

hangc0276 opened a new pull request, #3940:
URL: https://github.com/apache/bookkeeper/pull/3940

   ### Motivation
   When the bookie triggers compaction, with either the Transaction compactor or the EntryLogCompactor, the write throughput becomes jittery.
   <img width="853" alt="image" src="https://user-images.githubusercontent.com/5436568/235288456-482bc1c2-3511-48a3-a698-f70bf93c20dd.png">
   <img width="847" alt="image" src="https://user-images.githubusercontent.com/5436568/235288468-d51fe344-ed10-4ef9-ad7f-62fa83577a7b.png">
   <img width="843" alt="image" src="https://user-images.githubusercontent.com/5436568/235288476-e723b87a-e600-457e-af44-5121d3a4204b.png">
   
   I took a deep look at the logs and found that while transaction compaction was compacting entry log 491, there were write cache flushes and new entry log file rolls.
   ```
   2023-04-26T06:14:16,356+0000 [GarbageCollectorThread-6-1] INFO  org.apache.bookkeeper.bookie.TransactionalEntryLogCompactor - Compacting entry log 491 with usage 8.796982355448682E-6.
   2023-04-26T06:14:16,356+0000 [GarbageCollectorThread-6-1] INFO  org.apache.bookkeeper.bookie.EntryLoggerAllocator - Created new entry log file /pulsar/data/bookkeeper/ledgers-0/current/5d3.log.compacting for logId 1491.
   2023-04-26T06:14:17,198+0000 [bookie-io-8-2] INFO  org.apache.bookkeeper.bookie.storage.ldb.SingleDirectoryDbLedgerStorage - Write cache is full, triggering flush
   2023-04-26T06:14:19,030+0000 [bookie-io-8-2] INFO  org.apache.bookkeeper.bookie.storage.ldb.SingleDirectoryDbLedgerStorage - Write cache is full, triggering flush
   2023-04-26T06:14:19,806+0000 [db-storage-4-1] INFO  org.apache.bookkeeper.bookie.EntryLogManagerBase - Creating a new entry log file : createNewLog = false, reachEntryLogLimit = true
   2023-04-26T06:14:19,806+0000 [db-storage-4-1] INFO  org.apache.bookkeeper.bookie.EntryLogManagerBase - Flushing entry logger 1488 back to filesystem, pending for syncing entry loggers : [BufferedChannel{logId=1488, logFile
   =/pulsar/data/bookkeeper/ledgers-0/current/5d0.log, ledgerIdAssigned=-1}].
   2023-04-26T06:14:19,807+0000 [pool-4-thread-1] INFO  org.apache.bookkeeper.bookie.EntryLoggerAllocator - Created new entry log file /pulsar/data/bookkeeper/ledgers-0/current/5d4.log for logId 1492.
   2023-04-26T06:14:19,968+0000 [db-storage-4-1] INFO  org.apache.bookkeeper.bookie.EntryLogManagerForSingleEntryLog - Synced entry logger 1488 to disk.
   2023-04-26T06:14:20,971+0000 [bookie-io-8-2] INFO  org.apache.bookkeeper.bookie.storage.ldb.SingleDirectoryDbLedgerStorage - Write cache is full, triggering flush
   2023-04-26T06:14:22,999+0000 [bookie-io-8-2] INFO  org.apache.bookkeeper.bookie.storage.ldb.SingleDirectoryDbLedgerStorage - Write cache is full, triggering flush
   2023-04-26T06:14:24,960+0000 [bookie-io-8-2] INFO  org.apache.bookkeeper.bookie.storage.ldb.SingleDirectoryDbLedgerStorage - Write cache is full, triggering flush
   2023-04-26T06:14:27,109+0000 [bookie-io-8-1] INFO  org.apache.bookkeeper.bookie.storage.ldb.SingleDirectoryDbLedgerStorage - Write cache is full, triggering flush
   2023-04-26T06:14:29,204+0000 [bookie-io-8-1] INFO  org.apache.bookkeeper.bookie.storage.ldb.SingleDirectoryDbLedgerStorage - Write cache is full, triggering flush
   2023-04-26T06:14:32,569+0000 [bookie-io-8-1] INFO  org.apache.bookkeeper.bookie.storage.ldb.SingleDirectoryDbLedgerStorage - Write cache is full, triggering flush
   2023-04-26T06:14:32,610+0000 [BookieJournal-3181] INFO  org.apache.bookkeeper.bookie.JournalChannel - Opening journal /pulsar/data/bookkeeper/journal-0/current/187bb72bb20.txn
   2023-04-26T06:14:35,383+0000 [GarbageCollectorThread-6-1] INFO  org.apache.bookkeeper.bookie.DefaultEntryLogger - Flushed compaction log file /pulsar/data/bookkeeper/ledgers-0/current/5d3.log.compacting with logId 1491.
   2023-04-26T06:14:36,265+0000 [GarbageCollectorThread-6-1] INFO  org.apache.bookkeeper.bookie.EntryLogManagerBase - Creating a new entry log file : createNewLog = false, reachEntryLogLimit = true
   2023-04-26T06:14:36,266+0000 [GarbageCollectorThread-6-1] INFO  org.apache.bookkeeper.bookie.EntryLogManagerBase - Flushing entry logger 1490 back to filesystem, pending for syncing entry loggers : [BufferedChannel{logId=1490, logFile=/pulsar/data/bookkeeper/ledgers-0/current/5d2.log, ledgerIdAssigned=-1}].
   2023-04-26T06:14:36,266+0000 [pool-4-thread-1] INFO  org.apache.bookkeeper.bookie.EntryLoggerAllocator - Created new entry log file /pulsar/data/bookkeeper/ledgers-0/current/5d5.log for logId 1493.
   2023-04-26T06:14:36,362+0000 [GarbageCollectorThread-6-1] INFO  org.apache.bookkeeper.bookie.EntryLogManagerForSingleEntryLog - Synced entry logger 1490 to disk.
   2023-04-26T06:14:36,720+0000 [bookie-io-8-1] INFO  org.apache.bookkeeper.bookie.storage.ldb.SingleDirectoryDbLedgerStorage - Write cache is full, triggering flush
   2023-04-26T06:14:36,970+0000 [GarbageCollectorThread-6-1] INFO  org.apache.bookkeeper.bookie.Journal - garbage collected journal 187bb72bb1f.txn
   2023-04-26T06:14:37,498+0000 [GarbageCollectorThread-6-1] INFO  org.apache.bookkeeper.bookie.GarbageCollectorThread - Removing entry log metadata for 491
   2023-04-26T06:14:37,498+0000 [GarbageCollectorThread-6-1] INFO  org.apache.bookkeeper.bookie.TransactionalEntryLogCompactor - Compacted entry log : 491.
   ```
   
   Let's take a look at the compaction steps before analyzing the root cause; a rough sketch in code follows each list below.
   
   #### Transaction Compactor
   - Step1: Create a new entry log file with `compacting` suffix
   - Step2: Open the original entry log file and read the ledgers one by one, filter out those deleted ledgers, and write the remaining ledgers into the **newly created entry log file**
   - Step3: Flush the newly created entry log file
   - **Step4: Flush the current ledger storage (Trigger a new checkpoint)** https://github.com/apache/bookkeeper/blob/c765aea600fde1a8ac7cb8a53a1157a372026894/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/storage/ldb/SingleDirectoryDbLedgerStorage.java#L947
   - Step5: Update the entries' new index in the newly created entry log file into the RocksDB
   - Step6: Remove the `compacting` suffix
   - Step7: Delete the original entry log file and remove it from the entryLogMetaMap
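
   To make the ordering concrete, here is a rough sketch of the transactional compaction sequence described above. All of the types and helper methods below (`CompactionLog`, `LedgerStorage`, `makeAvailable`, and so on) are simplified stand-ins, not the real BookKeeper APIs; the point is only where the Step 4 checkpoint sits relative to the Step 5 index update.

   ```java
   import java.io.IOException;
   import java.util.ArrayList;
   import java.util.List;

   // Simplified stand-ins for the real BookKeeper components; none of these are the actual APIs.
   interface CompactionLog {
       long addEntry(long ledgerId, long entryId, byte[] data) throws IOException; // returns new location
       void flush() throws IOException;
       void makeAvailable() throws IOException; // drop the ".compacting" suffix
   }

   interface LedgerStorage {
       void flush() throws IOException;                                        // full checkpoint
       void updateEntriesLocations(List<long[]> locations) throws IOException; // {ledgerId, entryId, location}
   }

   public final class TransactionalCompactionSketch {

       // Steps 2-7 from the list above; Step 1 is the creation of `compactionLog`.
       static void compact(CompactionLog compactionLog, LedgerStorage ledgerStorage,
                           List<long[]> survivingEntries, // {ledgerId, entryId} of the non-deleted entries
                           Runnable deleteOriginalLog) throws IOException {
           List<long[]> newLocations = new ArrayList<>();
           for (long[] e : survivingEntries) {                    // Step 2: copy surviving entries
               long location = compactionLog.addEntry(e[0], e[1], new byte[0]);
               newLocations.add(new long[]{e[0], e[1], location});
           }
           compactionLog.flush();                                 // Step 3: compacted data is on disk
           // Step 4: checkpoint of the whole write cache. In the real code this flush()
           // sits at the top of updateEntriesLocations() and is what this PR removes.
           ledgerStorage.flush();
           ledgerStorage.updateEntriesLocations(newLocations);    // Step 5: repoint the RocksDB index
           compactionLog.makeAvailable();                         // Step 6: remove the ".compacting" suffix
           deleteOriginalLog.run();                               // Step 7: drop the original entry log
       }
   }
   ```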
   
   #### EntryLog Compactor
   - Step1: Open the original entry log file and read the ledgers one by one, filter out those deleted ledgers, and write the remaining ledgers into the **current writing entry log file**
   - Step2: Flush the current writing entry log file to ensure those written ledgers are flushed into the disk
   - **Step3: Flush the current ledger storage (Trigger a new checkpoint)** https://github.com/apache/bookkeeper/blob/c765aea600fde1a8ac7cb8a53a1157a372026894/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/storage/ldb/SingleDirectoryDbLedgerStorage.java#L947
   - Step4: Update the entries' new index in the current entry log file into the RocksDB
   - Step5: Delete the original entry log file and remove it from the entryLogMetaMap
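
   The corresponding sketch for the EntryLogCompactor, with the same caveat that the types are illustrative stand-ins rather than the real interfaces: the main structural difference is that surviving entries are rewritten into the currently active entry log, which the compactor flushes itself (Step 2) before the checkpoint and index update.

   ```java
   import java.io.IOException;
   import java.util.ArrayList;
   import java.util.List;

   // Illustrative stand-ins only; not the real EntryLogger / ledger storage interfaces.
   interface ActiveEntryLogger {
       long addEntry(long ledgerId, long entryId, byte[] data) throws IOException; // returns new location
       void flush() throws IOException;
   }

   interface CompactableLedgerStorage {
       void flush() throws IOException;                                        // full checkpoint
       void updateEntriesLocations(List<long[]> locations) throws IOException;
   }

   public final class EntryLogCompactionSketch {

       static void compact(ActiveEntryLogger entryLogger, CompactableLedgerStorage ledgerStorage,
                           List<long[]> survivingEntries, Runnable deleteOriginalLog) throws IOException {
           List<long[]> newLocations = new ArrayList<>();
           for (long[] e : survivingEntries) {                  // Step 1: rewrite into the *current* entry log
               long location = entryLogger.addEntry(e[0], e[1], new byte[0]);
               newLocations.add(new long[]{e[0], e[1], location});
           }
           entryLogger.flush();                                 // Step 2: the rewritten entries are durable
           ledgerStorage.flush();                               // Step 3: whole write-cache checkpoint
           ledgerStorage.updateEntriesLocations(newLocations);  // Step 4: update the RocksDB index
           deleteOriginalLog.run();                             // Step 5: drop the original entry log
       }
   }
   ```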
   
   Step 4 in the Transaction Compactor and Step 3 in the EntryLog Compactor trigger a new checkpoint and swap the write cache, no matter how much data the write cache holds.
   
   However, looking back at the whole compaction flow, the `Flush the current ledger storage` step presumably exists to ensure the remaining data has been flushed into the entry log file before the entries' index is updated in RocksDB. But both the `Transaction Compactor` and the `EntryLog Compactor` already trigger flush operations to ensure the remaining data is flushed into the entry log file. So I think the `Flush the current ledger storage` step is unnecessary, and it only adds throughput impact.
   
   
   ### Changes
   Remove the flush operation from `SingleDirectoryDbLedgerStorage#updateEntriesLocations`.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@bookkeeper.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [bookkeeper] hangc0276 commented on a diff in pull request #3940: Improve compaction performance

Posted by "hangc0276 (via GitHub)" <gi...@apache.org>.
hangc0276 commented on code in PR #3940:
URL: https://github.com/apache/bookkeeper/pull/3940#discussion_r1182024566


##########
bookkeeper-server/src/test/java/org/apache/bookkeeper/bookie/storage/ldb/DbLedgerStorageTest.java:
##########
@@ -224,6 +224,7 @@ public void testBookieCompaction() throws Exception {
         entry3.writeLong(3); // entry id
         entry3.writeBytes("entry-3".getBytes());
         storage.addEntry(entry3);
+        storage.flush();

Review Comment:
   > Let me take one case for example.
   Timeline: [t1, t2, t3]
   t1: An entry (LedgerId = 1, EntryId = 1) with value "test-v1" is written into the bookie by a bookie client
   t2: The same entry (LedgerId = 1, EntryId = 1) is written again with the new value "test-new-v1" by a bookie client
   t3: The bookie triggers compaction of the entry written at t1; the entry (LedgerId = 1, EntryId = 1) with value "test-v1" is rewritten into the current entry log file and its lookup index is updated.
   
   > Before this change
   Due to the compactor flushing the current ledger storage write cache before updating the entry's lookup index, the updated value of the entry "test-new-v1" in t2 will be flushed into storage and removed from the write cache. If we get the entry (LedgerId = 1, EntryId = 1) after t3, we will get the old value test-v1
   
   This test shows the above behavior
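
   A self-contained toy model of that timeline (this is not the real `DbLedgerStorage` or the test code above; the write cache and the persisted store are plain maps, and `compactWithOldValue` only mimics the compactor's rewrite plus index update):

   ```java
   import java.util.HashMap;
   import java.util.Map;

   /**
    * Toy model of the t1/t2/t3 timeline. NOT the real DbLedgerStorage: reads hit the
    * write cache first and fall back to the persisted store, which is the property
    * the behavior change hinges on.
    */
   public class CompactionOverwriteSketch {

       static class ToyStorage {
           final Map<String, String> writeCache = new HashMap<>();
           final Map<String, String> persisted = new HashMap<>();

           void addEntry(String key, String value) { writeCache.put(key, value); }

           // Flushing moves the write cache contents into the persisted store.
           void flush() { persisted.putAll(writeCache); writeCache.clear(); }

           // Compaction rewrites the *old* value and points the "index" at it.
           void compactWithOldValue(String key, String oldValue, boolean flushFirst) {
               if (flushFirst) {
                   flush();                  // behavior before this PR: full checkpoint first
               }
               persisted.put(key, oldValue); // the index now points at the rewritten old entry
           }

           // Reads check the write cache first, then the persisted store.
           String get(String key) { return writeCache.getOrDefault(key, persisted.get(key)); }
       }

       public static void main(String[] args) {
           // Before the change: the pre-update flush evicts "test-new-v1" from the cache,
           // so the compacted old value wins and the read returns "test-v1".
           ToyStorage before = new ToyStorage();
           before.addEntry("1:1", "test-v1");      // t1
           before.flush();
           before.addEntry("1:1", "test-new-v1");  // t2
           before.compactWithOldValue("1:1", "test-v1", true);  // t3
           System.out.println("before the change: " + before.get("1:1")); // test-v1

           // After the change: "test-new-v1" is still in the write cache at t3,
           // so the read keeps returning the newer value.
           ToyStorage after = new ToyStorage();
           after.addEntry("1:1", "test-v1");       // t1
           after.flush();
           after.addEntry("1:1", "test-new-v1");   // t2
           after.compactWithOldValue("1:1", "test-v1", false);  // t3
           System.out.println("after the change: " + after.get("1:1"));   // test-new-v1
       }
   }
   ```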





[GitHub] [bookkeeper] hangc0276 commented on a diff in pull request #3940: Improve compaction performance

Posted by "hangc0276 (via GitHub)" <gi...@apache.org>.
hangc0276 commented on code in PR #3940:
URL: https://github.com/apache/bookkeeper/pull/3940#discussion_r1182024031


##########
bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/storage/ldb/SingleDirectoryDbLedgerStorage.java:
##########
@@ -943,9 +943,6 @@ public Iterable<Long> getActiveLedgersInRange(long firstLedgerId, long lastLedge
 
     @Override
     public void updateEntriesLocations(Iterable<EntryLocation> locations) throws IOException {
-        // Trigger a flush to have all the entries being compacted in the db storage
-        flush();
-
         entryLocationIndex.updateLocations(locations);

Review Comment:
   Looking back at the whole compaction flow, flushing the current ledger storage write cache presumably exists to ensure the remaining data has been flushed into the entry log file before the entries' index is updated in RocksDB. But both the Transaction Compactor and the EntryLog Compactor already trigger flush operations to ensure the remaining data is flushed into the entry log file. So I think the flush of the current ledger storage is unnecessary, and it only adds throughput impact.





[GitHub] [bookkeeper] eolivelli commented on a diff in pull request #3940: Improve compaction performance

Posted by "eolivelli (via GitHub)" <gi...@apache.org>.
eolivelli commented on code in PR #3940:
URL: https://github.com/apache/bookkeeper/pull/3940#discussion_r1182194992


##########
bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/storage/ldb/SingleDirectoryDbLedgerStorage.java:
##########
@@ -943,9 +943,6 @@ public Iterable<Long> getActiveLedgersInRange(long firstLedgerId, long lastLedge
 
     @Override
     public void updateEntriesLocations(Iterable<EntryLocation> locations) throws IOException {
-        // Trigger a flush to have all the entries being compacted in the db storage
-        flush();
-
         entryLocationIndex.updateLocations(locations);

Review Comment:
   Maybe we should update the comments here:
   https://github.com/apache/bookkeeper/blob/f5455f01584b1b0a592f020eed49d3cb774da0a9/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/storage/ldb/SingleDirectoryDbLedgerStorage.java#L1040



##########
bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/storage/ldb/SingleDirectoryDbLedgerStorage.java:
##########
@@ -943,9 +943,6 @@ public Iterable<Long> getActiveLedgersInRange(long firstLedgerId, long lastLedge
 
     @Override
     public void updateEntriesLocations(Iterable<EntryLocation> locations) throws IOException {
-        // Trigger a flush to have all the entries being compacted in the db storage
-        flush();
-
         entryLocationIndex.updateLocations(locations);

Review Comment:
   For the standard compactor we are calling `EntryLogger.flush()` here before calling `updateEntriesLocations`
   https://github.com/apache/bookkeeper/blob/405e72acf42bb1104296447ea8840d805094c787/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/EntryLogCompactor.java#L120
   
   My understanding is that with this change we are not flushing the index on RocksDB before actually updating the locations and we will wait for the next flush to happen.
   
   In this [place](https://github.com/apache/bookkeeper/blob/ceba60565cf7cb438e9be4ab7416a2808b9168a1/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/storage/ldb/EntryLocationIndex.java#L183) we flush the batch of writes related to the new entry locations passed to `updateEntriesLocations`
   
   I wonder whether we risk writing these updates and then, at the next flush, overwriting the locations for the same entries with old data accumulated in `writeCacheBeingFlushed` https://github.com/apache/bookkeeper/blob/f5455f01584b1b0a592f020eed49d3cb774da0a9/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/storage/ldb/SingleDirectoryDbLedgerStorage.java#L817
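
   A toy illustration of that ordering concern, with a plain map standing in for the RocksDB location index (nothing below is the real `EntryLocationIndex` API): location updates are last-writer-wins, so whichever of the compactor and a later write-cache flush touches the same key second decides where the index points.

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Toy last-writer-wins location index; not the real EntryLocationIndex.
   public class LocationOverwriteSketch {
       public static void main(String[] args) {
           Map<String, Long> locationIndex = new HashMap<>();
           String entry = "ledger-1:entry-1";

           locationIndex.put(entry, 100L); // entry originally lives at location 100 in the old log

           // Compaction rewrites the entry into another log and updates its location.
           locationIndex.put(entry, 200L); // the updateEntriesLocations(...) step

           // A later flush writes a location for the same entry. The index has no notion
           // of which location is "newer", so this blindly wins over the compaction update.
           locationIndex.put(entry, 100L);

           System.out.println(entry + " -> " + locationIndex.get(entry)); // 100, not 200
       }
   }
   ```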





[GitHub] [bookkeeper] horizonzy commented on a diff in pull request #3940: Improve compaction performance

Posted by "horizonzy (via GitHub)" <gi...@apache.org>.
horizonzy commented on code in PR #3940:
URL: https://github.com/apache/bookkeeper/pull/3940#discussion_r1184696372


##########
bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/storage/ldb/SingleDirectoryDbLedgerStorage.java:
##########
@@ -943,9 +943,6 @@ public Iterable<Long> getActiveLedgersInRange(long firstLedgerId, long lastLedge
 
     @Override
     public void updateEntriesLocations(Iterable<EntryLocation> locations) throws IOException {
-        // Trigger a flush to have all the entries being compacted in the db storage
-        flush();
-
         entryLocationIndex.updateLocations(locations);

Review Comment:
   Agree with Hang.
   
   > If t3 happens when t2's updated data has been flushed into the entry log file and removed from the write cache, the old entry written in t3 will override the new entry written in t2. When we get the entry (LedgerId = 1, EntryId=1), we will get the old value test-v1.
   
   In this case, we should check whether the remaining data's new location has been overridden by incoming client entries.
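
   A rough sketch of the kind of guard being suggested here, again with a map standing in for the RocksDB index; `updateCompactedLocation` and every other name below is hypothetical, not the real BookKeeper API. The idea: only apply the compacted location if the index still points at the location the compactor read from, and drop the update if a newer write has already moved the entry.

   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;

   /**
    * Hypothetical compare-before-update guard for compacted entry locations.
    * The map stands in for the RocksDB-backed location index.
    */
   public class GuardedLocationUpdateSketch {

       private final Map<String, Long> locationIndex = new ConcurrentHashMap<>();

       /** Record the location written by a normal client add (or a write-cache flush). */
       public void recordWrite(long ledgerId, long entryId, long location) {
           locationIndex.put(ledgerId + ":" + entryId, location);
       }

       /**
        * Move the index to the compacted location only if it still points at the
        * location the compactor copied the entry from; returns false (and keeps the
        * newer location) if a concurrent write already moved it.
        */
       public boolean updateCompactedLocation(long ledgerId, long entryId,
                                              long expectedOldLocation, long compactedLocation) {
           String key = ledgerId + ":" + entryId;
           // replace(key, expected, new) is atomic, so a newer concurrent write wins.
           return locationIndex.replace(key, expectedOldLocation, compactedLocation);
       }
   }
   ```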
   





[GitHub] [bookkeeper] hangc0276 commented on a diff in pull request #3940: Improve compaction performance

Posted by "hangc0276 (via GitHub)" <gi...@apache.org>.
hangc0276 commented on code in PR #3940:
URL: https://github.com/apache/bookkeeper/pull/3940#discussion_r1184476716


##########
bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/storage/ldb/SingleDirectoryDbLedgerStorage.java:
##########
@@ -943,9 +943,6 @@ public Iterable<Long> getActiveLedgersInRange(long firstLedgerId, long lastLedge
 
     @Override
     public void updateEntriesLocations(Iterable<EntryLocation> locations) throws IOException {
-        // Trigger a flush to have all the entries being compacted in the db storage
-        flush();
-
         entryLocationIndex.updateLocations(locations);

Review Comment:
   In fact, the data accumulated in `writeCacheBeingFlushed` is new data, because we are compacting the old data.
   
   Refer to:
   > #### Behavior Change
   > After we removed the `flush the current ledger storage write cache` step, it brings one behavior change. https://github.com/apache/bookkeeper/blob/c765aea600fde1a8ac7cb8a53a1157a372026894/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/storage/ldb/SingleDirectoryDbLedgerStorage.java#L947
   > Let me take one case for example.
   > Timeline: [t1, t2, t3]
   t1: One entry (LedgerId = 1, EntryId = 1) with value "test-v1" written into the bookie from one bookie client
   t2: The entry (LedgerId = 1, EntryId = 1) with new value "test-new-v1" written into the bookie from one bookie client
   t3: The bookie triggers compaction of the entry written at t1; the entry (LedgerId = 1, EntryId = 1) with value "test-v1" is rewritten into the current entry log file and its lookup index is updated.
   > ##### Before this change
   > Due to the compactor flushing the current ledger storage write cache before updating the entry's lookup index, the updated value of the entry "test-new-v1" in `t2` will be flushed into storage and removed from the write cache. If we get the entry (LedgerId = 1, EntryId = 1) after `t3`, we will get the old value `test-v1`
   > ##### After this change
   > Due to this PR removed the compactor flushing the current ledger storage write cache, it has two cases:
   > - If `t3` happens when `t2`'s updated data is still located in the ledger storage's write cache, the new data updated in `t2` will override the old data written in t3. When we get the entry (LedgerId = 1, EntryId = 1), we will get the new value `test-new-v1`
   > - If `t3` happens when `t2`'s updated data has been flushed into the entry log file and removed from the write cache, the old entry written in `t3` will override the new entry written in `t2`. When we get the entry (LedgerId = 1, EntryId=1), we will get the old value `test-v1`.
   > IMO, we should always return the new value `test-new-v1`, not the old value `test-v1`. If we need to guarantee that reading the entry always returns the new value, we need extra checks before writing the old value during the compaction stage.
   > In Pulsar's general case, updating the entry's value won't happen.





[GitHub] [bookkeeper] dlg99 commented on a diff in pull request #3940: Improve compaction performance

Posted by "dlg99 (via GitHub)" <gi...@apache.org>.
dlg99 commented on code in PR #3940:
URL: https://github.com/apache/bookkeeper/pull/3940#discussion_r1181742865


##########
bookkeeper-server/src/test/java/org/apache/bookkeeper/bookie/storage/ldb/DbLedgerStorageTest.java:
##########
@@ -224,6 +224,7 @@ public void testBookieCompaction() throws Exception {
         entry3.writeLong(3); // entry id
         entry3.writeBytes("entry-3".getBytes());
         storage.addEntry(entry3);
+        storage.flush();

Review Comment:
   Do these changes in the test reflect the actual behavior of the production code path?



##########
bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/storage/ldb/SingleDirectoryDbLedgerStorage.java:
##########
@@ -943,9 +943,6 @@ public Iterable<Long> getActiveLedgersInRange(long firstLedgerId, long lastLedge
 
     @Override
     public void updateEntriesLocations(Iterable<EntryLocation> locations) throws IOException {
-        // Trigger a flush to have all the entries being compacted in the db storage
-        flush();
-
         entryLocationIndex.updateLocations(locations);

Review Comment:
   How does this change affect data consistency in case of, e.g., a node crash after updateLocations()?
   As I understand it, the bookie can end up with a persisted index pointing to data that hasn't been persisted.





[GitHub] [bookkeeper] wenbingshen commented on a diff in pull request #3940: Improve compaction performance

Posted by "wenbingshen (via GitHub)" <gi...@apache.org>.
wenbingshen commented on code in PR #3940:
URL: https://github.com/apache/bookkeeper/pull/3940#discussion_r1185679670


##########
bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/storage/ldb/SingleDirectoryDbLedgerStorage.java:
##########
@@ -943,9 +943,6 @@ public Iterable<Long> getActiveLedgersInRange(long firstLedgerId, long lastLedge
 
     @Override
     public void updateEntriesLocations(Iterable<EntryLocation> locations) throws IOException {
-        // Trigger a flush to have all the entries being compacted in the db storage
-        flush();
-
         entryLocationIndex.updateLocations(locations);

Review Comment:
   > Agree with Hang.
   
   +1 
   
   > > If t3 happens when t2's updated data has been flushed into the entry log file and removed from the write cache, the old entry written in t3 will override the new entry written in t2. When we get the entry (LedgerId = 1, EntryId=1), we will get the old value test-v1.
   > 
   > In this case. We should check if the remaining data's new location is overridden by the client incoming entries.
   
   In this case, should we compare whether the entry's location in the compacted entry log is consistent with its location in the current EntryLocationIndex?





[GitHub] [bookkeeper] hangc0276 commented on pull request #3940: Improve compaction performance

Posted by "hangc0276 (via GitHub)" <gi...@apache.org>.
hangc0276 commented on PR #3940:
URL: https://github.com/apache/bookkeeper/pull/3940#issuecomment-1556415932

   The new solution https://github.com/apache/bookkeeper/pull/3959 can fix one corner case, so I'm closing this PR.




[GitHub] [bookkeeper] hangc0276 closed pull request #3940: Improve compaction performance

Posted by "hangc0276 (via GitHub)" <gi...@apache.org>.
hangc0276 closed pull request #3940: Improve compaction performance
URL: https://github.com/apache/bookkeeper/pull/3940

