Posted to notifications@accumulo.apache.org by "cshannon (via GitHub)" <gi...@apache.org> on 2024/03/09 16:53:11 UTC

[PR] Update dead compaction detector to handle metadata/root [accumulo]

cshannon opened a new pull request, #4354:
URL: https://github.com/apache/accumulo/pull/4354

   This updates DeadCompactionDetector to also scan the metadata of the root and metadata tables for dead compactions on those tables.
   
   This closes #4340


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@accumulo.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Re: [PR] Update dead compaction detector to handle metadata/root [accumulo]

Posted by "cshannon (via GitHub)" <gi...@apache.org>.
cshannon commented on code in PR #4354:
URL: https://github.com/apache/accumulo/pull/4354#discussion_r1518886863


##########
server/manager/src/main/java/org/apache/accumulo/manager/compaction/coordinator/DeadCompactionDetector.java:
##########
@@ -145,9 +146,14 @@ private void detectDeadCompactions() {
     Map<ExternalCompactionId,KeyExtent> tabletCompactions = new HashMap<>();
 
     // find what external compactions tablets think are running
-    try (TabletsMetadata tabletsMetadata = context.getAmple().readTablets().forLevel(DataLevel.USER)
-        .filter(new HasExternalCompactionsFilter()).fetch(ColumnType.ECOMP, ColumnType.PREV_ROW)
-        .build()) {
+    try (Stream<TabletMetadata> tabletsMetadata = Stream
+        // Listing the data levels vs using DataLevel.values() prevents unexpected
+        // behavior if a new DataLevel is added
+        .of(DataLevel.ROOT, DataLevel.METADATA, DataLevel.USER)
+        .map(dataLevel -> context.getAmple().readTablets().forLevel(dataLevel)

Review Comment:
   This is addressed in #4357 





Re: [PR] Update dead compaction detector to handle metadata/root [accumulo]

Posted by "cshannon (via GitHub)" <gi...@apache.org>.
cshannon merged PR #4354:
URL: https://github.com/apache/accumulo/pull/4354




Re: [PR] Update dead compaction detector to handle metadata/root [accumulo]

Posted by "keith-turner (via GitHub)" <gi...@apache.org>.
keith-turner commented on code in PR #4354:
URL: https://github.com/apache/accumulo/pull/4354#discussion_r1520466891


##########
server/manager/src/main/java/org/apache/accumulo/manager/compaction/coordinator/CompactionCoordinator.java:
##########
@@ -710,6 +715,15 @@ public void compactionFailed(TInfo tinfo, TCredentials credentials, String exter
     compactionFailed(Map.of(ecid, KeyExtent.fromThrift(extent)));
   }
 
+  void compactionsFailed(Map<ExternalCompactionId,Pair<KeyExtent,DataLevel>> compactionsByLevel) {
+    // Need to process each level by itself because the conditional tablet mutator does not support
+    // mutating multiple data levels at the same time
+    compactionsByLevel.entrySet().stream()
+        .collect(groupingBy(entry -> entry.getValue().getSecond(),
+            Collectors.toMap(Entry::getKey, e -> e.getValue().getFirst())))
+        .forEach((level, compactions) -> compactionFailed(compactions));
+  }
+

Review Comment:
   We could avoid tracking the level in the outer code by computing it here when partitioning. That would require renaming the function that takes a map of data for a single level; I renamed it to `compactionFailedForLevel` in the suggestion.
   
   ```suggestion
     void compactionsFailed(Map<ExternalCompactionId,KeyExtent> compactionsByLevel) {
       // Need to process each level by itself because the conditional tablet mutator does not support
       // mutating multiple data levels at the same time
       compactionsByLevel.entrySet().stream()
           .collect(groupingBy(entry -> DataLevel.of(entry.getValue().tableId()),
               Collectors.toMap(Entry::getKey, Entry::getValue)))
           .forEach((level, compactions) -> compactionFailedForLevel(compactions));
     }
   ```
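   As a hedged aside, the grouping step in the suggestion can be illustrated with plain JDK types. `Level`, `levelOf`, and the sample table IDs below are simplified stand-ins for Accumulo's `DataLevel`, `DataLevel.of(tableId)`, and real table IDs, not the project's actual API:

   ```java
   import java.util.Map;
   import java.util.Map.Entry;
   import java.util.stream.Collectors;
   import static java.util.stream.Collectors.groupingBy;

   public class PartitionDemo {
       enum Level { ROOT, METADATA, USER }

       // Stand-in classifier playing the role of DataLevel.of(tableId);
       // the table-id strings here are illustrative, not authoritative.
       static Level levelOf(String tableId) {
           switch (tableId) {
               case "+r": return Level.ROOT;
               case "!0": return Level.METADATA;
               default:   return Level.USER;
           }
       }

       // Partitions a flat compactionId -> tableId map into one map per
       // level, mirroring the grouping in the suggested compactionsFailed.
       static Map<Level, Map<String, String>> partition(Map<String, String> compactions) {
           return compactions.entrySet().stream()
               .collect(groupingBy(e -> levelOf(e.getValue()),
                   Collectors.toMap(Entry::getKey, Entry::getValue)));
       }

       public static void main(String[] args) {
           Map<Level, Map<String, String>> byLevel =
               partition(Map.of("ECID-1", "+r", "ECID-2", "!0", "ECID-3", "2a"));
           System.out.println(byLevel.size()); // prints 3
       }
   }
   ```

   Each resulting inner map can then be handed to a per-level mutation in a separate call, since the conditional mutator handles one level at a time.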





Re: [PR] Update dead compaction detector to handle metadata/root [accumulo]

Posted by "keith-turner (via GitHub)" <gi...@apache.org>.
keith-turner commented on code in PR #4354:
URL: https://github.com/apache/accumulo/pull/4354#discussion_r1518637550


##########
server/manager/src/main/java/org/apache/accumulo/manager/compaction/coordinator/DeadCompactionDetector.java:
##########
@@ -145,9 +146,14 @@ private void detectDeadCompactions() {
     Map<ExternalCompactionId,KeyExtent> tabletCompactions = new HashMap<>();
 
     // find what external compactions tablets think are running
-    try (TabletsMetadata tabletsMetadata = context.getAmple().readTablets().forLevel(DataLevel.USER)
-        .filter(new HasExternalCompactionsFilter()).fetch(ColumnType.ECOMP, ColumnType.PREV_ROW)
-        .build()) {
+    try (Stream<TabletMetadata> tabletsMetadata = Stream
+        // Listing the data levels vs using DataLevel.values() prevents unexpected
+        // behavior if a new DataLevel is added
+        .of(DataLevel.ROOT, DataLevel.METADATA, DataLevel.USER)
+        .map(dataLevel -> context.getAmple().readTablets().forLevel(dataLevel)

Review Comment:
   I'm uncertain whether this will close the Closeable returned by `context.getAmple().readTablets()`.
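   For context on why this is a plausible concern: `Stream.map` makes no resource-management guarantee for values produced by the mapper, whereas `Stream.flatMap` is documented to close each mapped stream after its contents have been consumed. A minimal JDK-only sketch (the per-level readers are simulated; this is not Accumulo code):

   ```java
   import java.util.concurrent.atomic.AtomicInteger;
   import java.util.stream.Stream;

   public class StreamCloseDemo {
       // Returns how many simulated per-level readers were closed after the
       // pipeline runs. Stream.flatMap closes each mapped stream once its
       // contents have been consumed; plain Stream.map makes no such
       // guarantee, which is the root of the concern above.
       static int closedWithFlatMap() {
           AtomicInteger closed = new AtomicInteger();
           try (Stream<String> tablets = Stream.of("ROOT", "METADATA", "USER")
                   .flatMap(level -> Stream.of(level + "-tablet")
                           .onClose(closed::incrementAndGet))) {
               tablets.forEach(t -> { });
           }
           return closed.get();
       }

       public static void main(String[] args) {
           System.out.println(closedWithFlatMap()); // prints 3
       }
   }
   ```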



##########
server/manager/src/main/java/org/apache/accumulo/manager/compaction/coordinator/DeadCompactionDetector.java:
##########
@@ -145,9 +146,14 @@ private void detectDeadCompactions() {
     Map<ExternalCompactionId,KeyExtent> tabletCompactions = new HashMap<>();
 
     // find what external compactions tablets think are running
-    try (TabletsMetadata tabletsMetadata = context.getAmple().readTablets().forLevel(DataLevel.USER)
-        .filter(new HasExternalCompactionsFilter()).fetch(ColumnType.ECOMP, ColumnType.PREV_ROW)
-        .build()) {
+    try (Stream<TabletMetadata> tabletsMetadata = Stream

Review Comment:
   A few lines up there is a TODO that can be removed





Re: [PR] Update dead compaction detector to handle metadata/root [accumulo]

Posted by "cshannon (via GitHub)" <gi...@apache.org>.
cshannon commented on PR #4354:
URL: https://github.com/apache/accumulo/pull/4354#issuecomment-1987311097

   @keith-turner - I pushed an update that groups the compactions by data level before processing with the mutator since, as you pointed out, the conditional mutator does not support multiple data levels. I also refactored the tests and added a new test that creates dead compactions for all 3 levels at the same time. I verified the new test failed before my change and passes with the grouping.
   
   




Re: [PR] Update dead compaction detector to handle metadata/root [accumulo]

Posted by "cshannon (via GitHub)" <gi...@apache.org>.
cshannon commented on PR #4354:
URL: https://github.com/apache/accumulo/pull/4354#issuecomment-1988331313

   I should also add that I chose to partition the extents into buckets only at the end, when processing the remaining compactions that were considered dead and passed to the mutator. This kept things much simpler, since the rest of the code doesn't have to deal with 3 different buckets/groups. The performance hit should be minimal because my assumption is that dead compactions are uncommon, so there shouldn't be many items to process and partition.




Re: [PR] Update dead compaction detector to handle metadata/root [accumulo]

Posted by "cshannon (via GitHub)" <gi...@apache.org>.
cshannon commented on code in PR #4354:
URL: https://github.com/apache/accumulo/pull/4354#discussion_r1520499108


##########
server/manager/src/main/java/org/apache/accumulo/manager/compaction/coordinator/CompactionCoordinator.java:
##########
@@ -710,6 +715,15 @@ public void compactionFailed(TInfo tinfo, TCredentials credentials, String exter
     compactionFailed(Map.of(ecid, KeyExtent.fromThrift(extent)));
   }
 
+  void compactionsFailed(Map<ExternalCompactionId,Pair<KeyExtent,DataLevel>> compactionsByLevel) {
+    // Need to process each level by itself because the conditional tablet mutator does not support
+    // mutating multiple data levels at the same time
+    compactionsByLevel.entrySet().stream()
+        .collect(groupingBy(entry -> entry.getValue().getSecond(),
+            Collectors.toMap(Entry::getKey, e -> e.getValue().getFirst())))
+        .forEach((level, compactions) -> compactionFailed(compactions));
+  }
+

Review Comment:
   Thanks for the suggestion, that should simplify things. I can update that and merge later this week when I get some time, probably Wednesday.


