Posted to notifications@accumulo.apache.org by GitBox <gi...@apache.org> on 2019/09/19 14:23:28 UTC

[GitHub] [accumulo] keith-turner commented on a change in pull request #1366: Fix #1365 2.1 Upgrade processing for #1043 ~del

keith-turner commented on a change in pull request #1366: Fix #1365 2.1 Upgrade processing for #1043 ~del
URL: https://github.com/apache/accumulo/pull/1366#discussion_r326203124
 
 

 ##########
 File path: server/master/src/main/java/org/apache/accumulo/master/upgrade/Upgrader9to10.java
 ##########
 @@ -352,4 +372,87 @@ MetadataTime computeRootTabletTime(ServerContext context, Collection<String> goo
     }
   }
 
+  public static void upgradeFileDeletes(ServerContext ctx, Ample.DataLevel level) {
+
+    String tableName = level.metaTable();
+    AccumuloClient c = ctx;
+
+    // find all deletes
+    try (BatchWriter writer = c.createBatchWriter(tableName, new BatchWriterConfig())) {
+      String continuePoint = "";
+      boolean stillDeletes = true;
+
+      while (stillDeletes) {
+        List<String> deletes = new ArrayList<>();
+        log.info("looking for candidates");
+        stillDeletes = getOldCandidates(ctx, tableName, continuePoint, deletes);
+        log.info("found {} deletes to upgrade", deletes.size());
+        for (String olddelete : deletes) {
+          // create new formatted delete
+          writer.addMutation(upgradeDeleteMutation(olddelete));
+        }
+        writer.flush();
+
+        // if nothing thrown then we're good so mark all deleted
+        for (String olddelete : deletes) {
+          writer.addMutation(deleteOldDeleteMutation(olddelete));
+        }
+        writer.flush();
+
+        // give it some time for memory to clean itself up if needed
+        sleepUninterruptibly(5, TimeUnit.SECONDS);
+        // guard against an empty batch so the last-element lookup cannot throw
+        if (!deletes.isEmpty()) {
+          continuePoint = deletes.get(deletes.size() - 1);
+          log.debug("continuing from {}", continuePoint);
+        }
+      }
+    } catch (Exception e) {
+      // fail the upgrade loudly instead of silently swallowing errors
+      throw new RuntimeException("Failed to upgrade delete markers in " + tableName, e);
+    }
+  }
+
+  static boolean getOldCandidates(ServerContext ctx, String tableName, String continuePoint,
+      List<String> result) throws TableNotFoundException {
+
+    Range range = MetadataSchema.DeletesSection.getRange();
+    if (continuePoint != null && !continuePoint.isEmpty()) {
+      String continueRow = OLD_DELETE_PREFIX + continuePoint;
+      range = new Range(new Key(continueRow).followingKey(PartialKey.ROW), true, range.getEndKey(),
+          range.isEndKeyInclusive());
+    }
+
+    Scanner scanner = ctx.createScanner(tableName, Authorizations.EMPTY);
+    scanner.setRange(range);
+
+    // find old candidates for deletion; chop off the prefix
+    for (Map.Entry<Key,Value> entry : scanner) {
+      if (!entry.getValue().toString().equals(UPGRADED)) {
 
 Review comment:
  I really like this approach of setting something in the value.  I had hoped everything related to this change could be isolated to the upgrade code, but now that I am looking at the code I see that it can't.  While the upgrade code is running, two other processes could be active concurrently: the Accumulo GC may be running, and metadata tablets could be compacting.  The Accumulo GC would see both the old and new delete markers, and metadata table compactions could write out new delete markers which this code could corrupt.
   
  One possible solution would be to always put something in the value for new-style delete markers, which you mentioned doing in #1344.  Then Ample code could always ignore old-style markers, and new delete markers written out during upgrade would not be modified by this code.
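  
  As a rough sketch of what I mean (the sentinel constant and helper names below are hypothetical, not existing Ample API):
  
    import static java.nio.charset.StandardCharsets.UTF_8;
  
    import java.util.Map;
  
    import org.apache.accumulo.core.data.Key;
    import org.apache.accumulo.core.data.Mutation;
    import org.apache.accumulo.core.data.Value;
  
    // Hypothetical sketch: tag every new-style delete marker with a non-empty
    // sentinel value so readers can tell it apart from an old-style marker,
    // whose value is empty.
    public class DeleteMarkerSketch {
  
      // hypothetical sentinel; the real constant would live in Ample
      static final Value NEW_DELETE_SENTINEL = new Value("upgraded".getBytes(UTF_8));
  
      // write side: every new-style delete marker carries the sentinel
      static Mutation newStyleDeleteMutation(String deleteMarkerRow) {
        Mutation m = new Mutation(deleteMarkerRow);
        m.put("", "", NEW_DELETE_SENTINEL);
        return m;
      }
  
      // read side: Ample and the GC skip anything without the sentinel, so
      // old-style markers are ignored and new markers written concurrently by
      // metadata compactions are never re-processed by the upgrade loop
      static boolean isNewStyleMarker(Map.Entry<Key,Value> entry) {
        return NEW_DELETE_SENTINEL.equals(entry.getValue());
      }
    }
  
  With a marker like that in place, the upgrade loop above would only rewrite ~del entries that lack the sentinel, and everything else in the system could keep ignoring old-style rows until the upgrade finishes.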

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services