Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2020/07/23 21:14:08 UTC

[GitHub] [hadoop] ayushtkn commented on a change in pull request #2166: HDFS-15488. Add a command to list all snapshots for a snapshottable root with snapshot Ids.

ayushtkn commented on a change in pull request #2166:
URL: https://github.com/apache/hadoop/pull/2166#discussion_r459696176



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
##########
@@ -2004,6 +2005,16 @@ public void renameSnapshot(String snapshotRoot, String snapshotOldName,
     return status;
   }
 
+  @Override // Client Protocol
+  public SnapshotStatus[] getSnapshotListing(String path)

Review comment:
       Keep the argument name consistent: in ClientProtocol and the Router it is `snapshotRoot`, so better to use the same name everywhere.
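
   i.e. just rename the parameter, keeping it in sync with `ClientProtocol` (sketch):

        @Override // Client Protocol
        public SnapshotStatus[] getSnapshotListing(String snapshotRoot)
            throws IOException {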

##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
##########
@@ -2148,6 +2149,15 @@ public Void next(final FileSystem fs, final Path p)
     return dfs.getSnapshottableDirListing();
   }
 
+  /**
+   * @return all the snapshots for a snapshottable directory
+   * @throws IOException
+   */
+  public SnapshotStatus[] getSnapshotListing(Path snapshotRoot)
+      throws IOException {
+    return dfs.getSnapshotListing(getPathName(snapshotRoot));

Review comment:
       * Should the relative path be resolved as well?
   `    Path absF = fixRelativePart(path);`
   * Should read statistics be incremented as well?
   `    statistics.incrementReadOps(1);`
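
   Roughly, a sketch of the method with both additions (only the two marked lines are new; the rest is as in the patch):

        public SnapshotStatus[] getSnapshotListing(Path snapshotRoot)
            throws IOException {
          Path absF = fixRelativePart(snapshotRoot); // new: resolve relative paths
          statistics.incrementReadOps(1);            // new: count this as a read op
          return dfs.getSnapshotListing(getPathName(absF));
        }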

##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
##########
@@ -2190,6 +2191,24 @@ public void renameSnapshot(String snapshotDir, String snapshotOldName,
     }
   }
 
+  /**
+   * Get listing of all the snapshots for a snapshottable directory
+   *
+   * @return Information about all the snapshots for a snapshottable directory
+   * @throws IOException If an I/O error occurred
+   * @see ClientProtocol#getSnapshotListing()
+   */
+  public SnapshotStatus[] getSnapshotListing(String snapshotRoot)
+      throws IOException {
+    checkOpen();
+    try (TraceScope ignored = tracer.newScope("getSnapshottableDirListing")) {

Review comment:
       Seems like a copy-paste error; change to:
       try (TraceScope ignored = tracer.newScope("getSnapshotListing")) {

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
##########
@@ -155,6 +156,23 @@ static void renameSnapshot(FSDirectory fsd, FSPermissionChecker pc,
     }
   }
 
+  static SnapshotStatus[] getSnapshotListing(
+      FSDirectory fsd, SnapshotManager snapshotManager, String path)
+      throws IOException {
+    FSPermissionChecker pc = fsd.getPermissionChecker();

Review comment:
       `pc` can be obtained outside the FSN lock and passed in from `FSNamesystem` by fetching it before the lock is taken. That would reduce lock retention time; the FSDirectory lock below is effectively a dummy that just asserts the FSN lock is held.
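
   Roughly (the changed signature is just illustrative):

        // FSNamesystem#getSnapshotListing: build the checker before locking
        FSPermissionChecker pc = dir.getPermissionChecker();
        readLock();
        try {
          checkOperation(OperationCategory.READ);
          status = FSDirSnapshotOp.getSnapshotListing(dir, pc, snapshotManager,
              snapshotRoot);
        } finally {
          readUnlock();
        }

        // FSDirSnapshotOp: accept the checker instead of creating it here
        static SnapshotStatus[] getSnapshotListing(FSDirectory fsd,
            FSPermissionChecker pc, SnapshotManager snapshotManager,
            String path) throws IOException {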

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
##########
@@ -471,7 +473,35 @@ public void write(DataOutput out) throws IOException {
     return statusList.toArray(
         new SnapshottableDirectoryStatus[statusList.size()]);
   }
-  
+
+  /**
+   * List all the snapshots under a snapshottable directory.
+   */
+  public SnapshotStatus[] getSnapshotListing(INodesInPath iip)
+      throws IOException {
+    INodeDirectory srcRoot = getSnapshottableRoot(iip);
+    ReadOnlyList<Snapshot> snapshotList = srcRoot.getDirectorySnapshottableFeature().
+        getSnapshotList();
+    if (snapshotList.isEmpty()) {
+      return null;
+    }
+    List<SnapshotStatus> statusList =
+        new ArrayList<>();

Review comment:
       Isn't the size of the result always equal to the size of `snapshotList`? If so, the size is already known, so there is no need to build a list and then convert it to an array; an array of the same size as `snapshotList` can be allocated directly.
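
   Something along these lines (`toSnapshotStatus` is just a placeholder for whatever the current loop builds per entry):

        SnapshotStatus[] statuses = new SnapshotStatus[snapshotList.size()];
        for (int i = 0; i < statuses.length; i++) {
          // build the same per-snapshot status the list-based loop builds now
          statuses[i] = toSnapshotStatus(snapshotList.get(i));
        }
        return statuses;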

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/snapshot/LsSnapshot.java
##########
@@ -0,0 +1,74 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.tools.snapshot;
+
+import java.io.IOException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.protocol.SnapshotStatus;
+import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+/**
+ * A tool used to list all snapshottable directories that are owned by the
+ * current user. The tool returns all the snapshottable directories if the user
+ * is a super user.
+ */
+@InterfaceAudience.Private
+public class LsSnapshot extends Configured implements Tool {
+  @Override
+  public int run(String[] argv) throws Exception {
+    String description = "hdfs lsSnapshot <snapshotDir>: \n" +
+        "\tGet the list of snapshots for a snapshottable directory.\n";
+
+    if(argv.length != 1) {
+      System.err.println("Usage: \n" + description);
+      return 1;
+    }
+
+    FileSystem fs = FileSystem.get(getConf());
+    if (! (fs instanceof DistributedFileSystem)) {
+      System.err.println(
+          "lsSnapshot can only be used in DistributedFileSystem");
+      return 1;
+    }
+    DistributedFileSystem dfs = (DistributedFileSystem) fs;

Review comment:
       Can instead use:
   `    DistributedFileSystem dfs = AdminHelper.getDFS(getConf());`
   This handles `ViewFsOverloadScheme` as well.
   Note it can throw `IllegalArgumentException`, so it should sit inside the try block.
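
   A sketch of the shape I mean:

        Path snapshotRoot = new Path(argv[0]);
        try {
          // getDFS handles ViewFsOverloadScheme too, and throws
          // IllegalArgumentException for non-DFS filesystems
          DistributedFileSystem dfs = AdminHelper.getDFS(getConf());
          SnapshotStatus[] stats = dfs.getSnapshotListing(snapshotRoot);
          SnapshotStatus.print(stats, System.out);
        } catch (Exception e) {
          System.err.println("lsSnapshot: " + e.getLocalizedMessage());
          return 1;
        }
        return 0;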

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestListSnapshot.java
##########
@@ -0,0 +1,132 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode.snapshot;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.protocol.SnapshotStatus;
+import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
+import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertEquals;
+
+public class TestListSnapshot {
+
+  static final short REPLICATION = 3;
+
+  private final Path dir1 = new Path("/TestSnapshot1");
+
+  Configuration conf;
+  MiniDFSCluster cluster;
+  FSNamesystem fsn;
+  DistributedFileSystem hdfs;
+
+  @Before
+  public void setUp() throws Exception {
+    conf = new Configuration();
+    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(REPLICATION)
+        .build();
+    cluster.waitActive();
+    fsn = cluster.getNamesystem();
+    hdfs = cluster.getFileSystem();
+    hdfs.mkdirs(dir1);
+  }
+
+  @After
+  public void tearDown() throws Exception {
+    if (cluster != null) {
+      cluster.shutdown();
+      cluster = null;
+    }
+  }
+
+  /**
+   * Test listing all the snapshottable directories
+   */
+  @Test(timeout = 60000)
+  public void testListSnapshot() throws Exception {
+    cluster.getNamesystem().getSnapshotManager().setAllowNestedSnapshots(true);
+
+    // Initially there is no snapshottable directories in the system
+    SnapshotStatus[] snapshotStatuses = null;
+    SnapshottableDirectoryStatus[] dirs = hdfs.getSnapshottableDirListing();
+    assertNull(dirs);
+    try {
+      hdfs.getSnapshotListing(dir1);
+    } catch (Exception e) {
+      assertTrue(e.getMessage().contains(
+          "Directory is not a snapshottable directory"));
+    }

Review comment:
       Can use `LambdaTestUtils`:

        LambdaTestUtils.intercept(SnapshotException.class,
            "Directory is not a snapshottable directory",
            () -> hdfs.getSnapshotListing(dir1));

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestListSnapshot.java
##########
@@ -0,0 +1,132 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode.snapshot;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.protocol.SnapshotStatus;
+import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
+import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertEquals;
+
+public class TestListSnapshot {
+
+  static final short REPLICATION = 3;
+
+  private final Path dir1 = new Path("/TestSnapshot1");
+
+  Configuration conf;
+  MiniDFSCluster cluster;
+  FSNamesystem fsn;
+  DistributedFileSystem hdfs;
+
+  @Before
+  public void setUp() throws Exception {
+    conf = new Configuration();
+    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(REPLICATION)
+        .build();
+    cluster.waitActive();
+    fsn = cluster.getNamesystem();
+    hdfs = cluster.getFileSystem();
+    hdfs.mkdirs(dir1);
+  }
+
+  @After
+  public void tearDown() throws Exception {
+    if (cluster != null) {
+      cluster.shutdown();
+      cluster = null;
+    }
+  }
+
+  /**
+   * Test listing all the snapshottable directories
+   */
+  @Test(timeout = 60000)
+  public void testListSnapshot() throws Exception {
+    cluster.getNamesystem().getSnapshotManager().setAllowNestedSnapshots(true);

Review comment:
       `fsn` is already there, so this can change to:
   `    fsn.getSnapshotManager().setAllowNestedSnapshots(true);`

##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
##########
@@ -2190,6 +2191,24 @@ public void renameSnapshot(String snapshotDir, String snapshotOldName,
     }
   }
 
+  /**
+   * Get listing of all the snapshots for a snapshottable directory
+   *
+   * @return Information about all the snapshots for a snapshottable directory
+   * @throws IOException If an I/O error occurred
+   * @see ClientProtocol#getSnapshotListing()

Review comment:
       Should be:
   `ClientProtocol#getSnapshotListing(String)`

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
##########
@@ -7001,7 +7002,33 @@ void renameSnapshot(
     logAuditEvent(true, operationName, null, null, null);
     return status;
   }
-  
+
+  /**
+   * Get the list of snapshots for a given snapshottable directory.
+   *
+   * @return The list of all the snapshots for a snapshottable directory
+   * @throws IOException
+   */
+  public SnapshotStatus[] getSnapshotListing(String snapshotRoot)
+      throws IOException {
+    SnapshotStatus[] status = null;
+    checkOperation(OperationCategory.READ);
+    boolean success = false;
+    readLock();
+    try {
+      checkOperation(OperationCategory.READ);
+      status = FSDirSnapshotOp.getSnapshotListing(dir, snapshotManager,
+          snapshotRoot);
+      success = true;
+    } catch (AccessControlException ace) {
+      logAuditEvent(success, "listSnapshots", null, null, null);

Review comment:
       Should put the path in the audit log as well:
   `      logAuditEvent(success, "listSnapshots", snapshotRoot);`
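
   i.e. a sketch of the pattern, logging the path on both the failure and the success paths:

        } catch (AccessControlException ace) {
          logAuditEvent(success, "listSnapshots", snapshotRoot);
          throw ace;
        } finally {
          readUnlock();
        }
        logAuditEvent(success, "listSnapshots", snapshotRoot);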

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/snapshot/LsSnapshot.java
##########
@@ -0,0 +1,74 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.tools.snapshot;
+
+import java.io.IOException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.protocol.SnapshotStatus;
+import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+/**
+ * A tool used to list all snapshottable directories that are owned by the
+ * current user. The tool returns all the snapshottable directories if the user
+ * is a super user.
+ */
+@InterfaceAudience.Private
+public class LsSnapshot extends Configured implements Tool {
+  @Override
+  public int run(String[] argv) throws Exception {
+    String description = "hdfs lsSnapshot <snapshotDir>: \n" +
+        "\tGet the list of snapshots for a snapshottable directory.\n";
+
+    if(argv.length != 1) {
+      System.err.println("Usage: \n" + description);

Review comment:
       Would be good to print an error message as well, something like "Invalid number of arguments".
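
   e.g.:

        if (argv.length != 1) {
          System.err.println("Invalid number of arguments.");
          System.err.println("Usage: \n" + description);
          return 1;
        }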

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
##########
@@ -155,6 +156,23 @@ static void renameSnapshot(FSDirectory fsd, FSPermissionChecker pc,
     }
   }
 
+  static SnapshotStatus[] getSnapshotListing(
+      FSDirectory fsd, SnapshotManager snapshotManager, String path)
+      throws IOException {
+    FSPermissionChecker pc = fsd.getPermissionChecker();
+    fsd.readLock();
+    try {
+      INodesInPath iip = fsd.getINodesInPath(path, DirOp.READ);
+      if (fsd.isPermissionEnabled()) {
+        fsd.checkPermission(pc, iip, false, null, null, FsAction.READ,

Review comment:
       Can instead use:
   `fsd.checkPathAccess(pc, iip, FsAction.READ);`

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
##########
@@ -471,7 +473,35 @@ public void write(DataOutput out) throws IOException {
     return statusList.toArray(
         new SnapshottableDirectoryStatus[statusList.size()]);
   }
-  
+
+  /**
+   * List all the snapshots under a snapshottable directory.
+   */
+  public SnapshotStatus[] getSnapshotListing(INodesInPath iip)
+      throws IOException {
+    INodeDirectory srcRoot = getSnapshottableRoot(iip);
+    ReadOnlyList<Snapshot> snapshotList = srcRoot.getDirectorySnapshottableFeature().
+        getSnapshotList();
+    if (snapshotList.isEmpty()) {
+      return null;

Review comment:
       Do you want to return `null` when there are no snapshots? Can't we return an empty array instead?
   Returning `null` could break a client that checks the array's length to decide whether there are snapshots; that code would fail with an NPE.
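
   e.g.:

        if (snapshotList.isEmpty()) {
          // an empty array keeps callers' length checks safe
          return new SnapshotStatus[0];
        }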

##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterSnapshot.java
##########
@@ -157,6 +158,22 @@ public void renameSnapshot(String snapshotRoot, String oldSnapshotName,
     return RouterRpcServer.merge(ret, SnapshottableDirectoryStatus.class);
   }
 
+  public SnapshotStatus[] getSnapshotListing(String snapshotRoot)
+      throws IOException {
+    rpcServer.checkOperation(NameNode.OperationCategory.READ);
+    final List<RemoteLocation> locations =
+        rpcServer.getLocationsForPath(snapshotRoot, true, false);
+    RemoteMethod method = new RemoteMethod("getSnapshotListing",
+        new Class<?>[] {String.class},
+        new RemoteParam());
+    Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
+    Map<FederationNamespaceInfo, SnapshotStatus[]> ret =
+        rpcClient.invokeConcurrent(
+            nss, method, true, false, SnapshotStatus[].class);
+
+    return RouterRpcServer.merge(ret, SnapshotStatus.class);
+  }

Review comment:
       This is messed up.
   * `locations` needs to be passed; as of now you compute the locations from the path and then ignore them.
   * `invokeConcurrent` should be used only when the path is of type `isAll`; you can check with `rpcServer.isInvokeConcurrent(snapshotRoot)`.
   * Once you make this change I think `RouterRpcServer.merge(.)` won't work; you will need to write your own util to aggregate the results (see the sketch below).
   * This needs to be covered by a UT as well; `TestRouterRpc` or `TestRouterRPCMultipleDestinationMountTableResolver` could be a good place to add one.

    ** If you have any issue with RBF, let me know, will try to get you the code. :-)
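
   A rough sketch of the shape I have in mind (the aggregation is illustrative, adjust as needed):

        public SnapshotStatus[] getSnapshotListing(String snapshotRoot)
            throws IOException {
          rpcServer.checkOperation(NameNode.OperationCategory.READ);
          final List<RemoteLocation> locations =
              rpcServer.getLocationsForPath(snapshotRoot, true, false);
          RemoteMethod method = new RemoteMethod("getSnapshotListing",
              new Class<?>[] {String.class}, new RemoteParam());
          if (rpcServer.isInvokeConcurrent(snapshotRoot)) {
            // fan out only when the mount point maps to all destinations
            Map<RemoteLocation, SnapshotStatus[]> ret =
                rpcClient.invokeConcurrent(locations, method, true, false,
                    SnapshotStatus[].class);
            // RouterRpcServer.merge() won't fit here; flatten the
            // per-location results with a small local aggregator instead
            List<SnapshotStatus> all = new ArrayList<>();
            for (SnapshotStatus[] statuses : ret.values()) {
              if (statuses != null) {
                Collections.addAll(all, statuses);
              }
            }
            return all.toArray(new SnapshotStatus[0]);
          }
          return (SnapshotStatus[]) rpcClient.invokeSequential(
              locations, method, SnapshotStatus[].class, null);
        }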

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/snapshot/LsSnapshot.java
##########
@@ -0,0 +1,74 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.tools.snapshot;
+
+import java.io.IOException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.protocol.SnapshotStatus;
+import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+/**
+ * A tool used to list all snapshottable directories that are owned by the
+ * current user. The tool returns all the snapshottable directories if the user
+ * is a super user.
+ */
+@InterfaceAudience.Private
+public class LsSnapshot extends Configured implements Tool {
+  @Override
+  public int run(String[] argv) throws Exception {
+    String description = "hdfs lsSnapshot <snapshotDir>: \n" +
+        "\tGet the list of snapshots for a snapshottable directory.\n";
+
+    if(argv.length != 1) {
+      System.err.println("Usage: \n" + description);
+      return 1;
+    }
+
+    FileSystem fs = FileSystem.get(getConf());
+    if (! (fs instanceof DistributedFileSystem)) {
+      System.err.println(
+          "lsSnapshot can only be used in DistributedFileSystem");
+      return 1;
+    }
+    DistributedFileSystem dfs = (DistributedFileSystem) fs;
+    Path snapshotRoot = new Path(argv[0]);
+
+    try {
+      SnapshotStatus[] stats = dfs.getSnapshotListing(snapshotRoot);
+      SnapshotStatus.print(stats, System.out);
+    } catch (IOException e) {
+      String[] content = e.getLocalizedMessage().split("\n");
+      System.err.println("lsSnapshot: " + content[0]);
+      e.printStackTrace(System.err);
+      return 1;

Review comment:
       I don't think we need to print the stack trace on the CLI; a single line of error should work. Mostly this will trigger for `FileNotFoundException` or `SnapshotException` when the directory is not snapshottable; if required, we can log the exception with its trace at `debug`.
   Apart from that, we should catch `Exception` as well for any runtime exceptions; propagating a raw exception on the CLI won't look good.
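
   i.e. something like:

        try {
          SnapshotStatus[] stats = dfs.getSnapshotListing(snapshotRoot);
          SnapshotStatus.print(stats, System.out);
        } catch (Exception e) {
          // a single-line error on the CLI; the full trace can go to a
          // debug log if really needed
          String[] content = e.getLocalizedMessage().split("\n");
          System.err.println("lsSnapshot: " + content[0]);
          return 1;
        }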



