Posted to issues@ozone.apache.org by GitBox <gi...@apache.org> on 2021/05/07 12:41:01 UTC

[GitHub] [ozone] adoroszlai commented on a change in pull request #2196: HDDS-5149. When the source datanode downloads a container tar from the target datanode but the target datanode's container files are missing, the import fails

adoroszlai commented on a change in pull request #2196:
URL: https://github.com/apache/ozone/pull/2196#discussion_r628170059



##########
File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/TarContainerPacker.java
##########
@@ -152,6 +156,20 @@ public void pack(Container<KeyValueContainerData> container,
     try (OutputStream compressed = compress(output);
          ArchiveOutputStream archiveOutput = tar(compressed)) {
 
+      if (!containerData.getDbFile().exists()) {
+        LOG.warn("DB file {} does not exist",
+            containerData.getDbFile().toPath());
+        return;
+      } else if (!new File(containerData.getChunksPath()).exists()) {
+        LOG.warn("Chunks path {} does not exist",
+            containerData.getChunksPath());
+        return;
+      } else if (!container.getContainerFile().exists()) {
+        LOG.warn("Container file {} does not exist",
+            container.getContainerFile().toPath());
+        return;
+      }
+

Review comment:
       Would it make sense to use `Container.scanMetadata()` instead? It would give more complete checks without duplicating them here.
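
       For illustration, a rough sketch of that idea inside `pack()` (a sketch only, assuming `scanMetadata()` returns `false` when a required file is missing; the exact method name and return type should be checked against the `Container` interface):

```java
// Sketch only: guard the export with the container's own metadata scan
// instead of repeating per-file existence checks. Assumes scanMetadata()
// returns false when the DB file, chunks path, or container file is missing.
if (!container.scanMetadata()) {
  LOG.warn("Skipping export of container {}: metadata scan failed",
      containerData.getContainerID());
  return;
}
```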

##########
File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/DownloadAndImportReplicator.java
##########
@@ -121,10 +121,17 @@ public void replicate(ReplicationTask task) {
         LOG.info("Container {} is downloaded with size {}, starting to import.",
                 containerID, bytes);
         task.setTransferredBytes(bytes);
-
-        importContainer(containerID, path);
-        LOG.info("Container {} is replicated successfully", containerID);
-        task.setStatus(Status.DONE);
+        // an empty tar file is 45 bytes
+        if (bytes <= 45) {
+          task.setStatus(Status.FAILED);
+          LOG.warn("Container {} was downloaded with size {}; " +
+              "a tar file of 45 bytes or less contains no data",
+              containerID, bytes);

Review comment:
       I think this kind of tar-specific detail should be handled by `TarContainerPacker`.
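
       For illustration only, a minimal sketch of how the packer side could reject an empty archive without a magic byte count; the helper name is hypothetical and its wiring into `TarContainerPacker` is not part of this PR:

```java
import java.io.IOException;
import java.io.InputStream;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;

final class TarSanityCheck {
  /**
   * Hypothetical helper: fail fast when a tar stream contains no entries,
   * instead of comparing the downloaded size against 45 bytes.
   * Assumes the stream has already been decompressed.
   */
  static void verifyTarHasEntries(InputStream decompressed) throws IOException {
    TarArchiveInputStream tar = new TarArchiveInputStream(decompressed);
    if (tar.getNextTarEntry() == null) {
      throw new IOException("Container archive contains no entries");
    }
  }
}
```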

##########
File path: hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainer.java
##########
@@ -238,6 +239,55 @@ public void testContainerImportExport() throws Exception {
     }
   }
 
+  @Test
+  public void testContainerMissingFileImportExport() throws Exception {
+    long containerId = keyValueContainer.getContainerData().getContainerID();
+    createContainer();
+    long numberOfKeysToWrite = 12;
+    closeContainer();
+    populate(numberOfKeysToWrite);
+
+    //destination path
+    File folderToExport = folder.newFile("exported.tar.gz");
+    TarContainerPacker packer = new TarContainerPacker();
+
+    //simulate a missing chunks directory
+    File chunkfile = new File(keyValueContainer.
+        getContainerData().getChunksPath());
+    Files.delete(chunkfile.toPath());
+    Assert.assertFalse(chunkfile.exists());
+    //export the container
+    try (FileOutputStream fos = new FileOutputStream(folderToExport)) {
+      keyValueContainer
+          .exportContainerData(fos, packer);
+    }
+
+    //delete the original one
+    keyValueContainer.delete();
+
+    //create a new one
+    KeyValueContainerData containerData =
+        new KeyValueContainerData(containerId,
+            keyValueContainerData.getLayOutVersion(),
+            keyValueContainerData.getMaxSize(), UUID.randomUUID().toString(),
+            datanodeId.toString());
+    KeyValueContainer container = new KeyValueContainer(containerData, CONF);
+
+    HddsVolume containerVolume = volumeChoosingPolicy.chooseVolume(volumeSet
+        .getVolumesList(), 1);
+    String hddsVolumeDir = containerVolume.getHddsRootDir().toString();
+
+    container.populatePathFields(scmId, containerVolume, hddsVolumeDir);
+    long bytes = Files.size(folderToExport.toPath());
+    Assert.assertTrue(bytes <= 45);
+
+    try (FileInputStream fis = new FileInputStream(folderToExport)) {
+      container.importContainerData(fis, packer);
+    } catch (Exception ex) {
+      assertTrue(ex instanceof NullPointerException);

Review comment:
       * This test passes even if `importContainerData` does not throw at all, which is not the expected behavior.
   * An exception other than `NullPointerException` may describe the error better.
   * `Assert.assertThrows` or `LambdaTestUtils.intercept` would give more detail if the assertion fails (e.g. if an unexpected kind of exception is thrown); see the sketch below.
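
   A rough sketch of the `Assert.assertThrows` variant, reusing the test's `folderToExport`, `container`, and `packer`; `StorageContainerException` is only a placeholder for whichever exception the import path ends up throwing:

```java
// Sketch only: StorageContainerException is a placeholder for whatever
// exception importContainerData should throw for an empty archive.
Assert.assertThrows(StorageContainerException.class, () -> {
  try (FileInputStream fis = new FileInputStream(folderToExport)) {
    container.importContainerData(fis, packer);
  }
});
```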




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org
For additional commands, e-mail: issues-help@ozone.apache.org