Posted to issues@ozone.apache.org by GitBox <gi...@apache.org> on 2021/02/15 13:07:20 UTC

[GitHub] [ozone] siddhantsangwan opened a new pull request #1919: HDDS-4816. Add DiskMetricsSubCommand to get Datanode usage information.

siddhantsangwan opened a new pull request #1919:
URL: https://github.com/apache/ozone/pull/1919


   ## What changes were proposed in this pull request?
   
   The `DiskMetricsSubCommand` is a datanode admin subcommand that queries usage information, such as Capacity, SCMUsed, and Remaining space, for a datanode. The target datanode is looked up by IP address or UUID.
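   
   As a usage sketch (assuming the subcommand is wired under the `ozone admin datanode` command group, which this description does not show; the `disk-metrics` name and the `--ip`/`--uuid` options come from the patch itself):
   
       # query disk metrics of a datanode by IP address
       ozone admin datanode disk-metrics --ip <datanode-ip>
       # or by UUID
       ozone admin datanode disk-metrics --uuid <datanode-uuid>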
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-4816
   
   ## How was this patch tested?
   
   Manually tested using docker-compose. Screenshots have been attached.
   
   Initial, before adding keys:
   <img width="1098" alt="initial" src="https://user-images.githubusercontent.com/34305492/107949969-11512f00-6fbc-11eb-95b9-8ff68325db2d.png">
   
   After adding keys:
   <img width="607" alt="keys added" src="https://user-images.githubusercontent.com/34305492/107949996-1d3cf100-6fbc-11eb-8681-baec79310139.png">
   
   Using UUID:
   <img width="1098" alt="uuid" src="https://user-images.githubusercontent.com/34305492/107950069-3a71bf80-6fbc-11eb-9efe-5b54a3cef16f.png">
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org
For additional commands, e-mail: issues-help@ozone.apache.org


[GitHub] [ozone] siddhantsangwan commented on a change in pull request #1919: HDDS-4816. Add DiskMetricsSubCommand to get Datanode usage information.

Posted by GitBox <gi...@apache.org>.
siddhantsangwan commented on a change in pull request #1919:
URL: https://github.com/apache/ozone/pull/1919#discussion_r580805095



##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
##########
@@ -618,6 +607,59 @@ public boolean getReplicationManagerStatus() {
     return scm.getReplicationManager().isRunning();
   }
 
+  /**
+   * Get Datanode disk metrics (such as capacity, used) by ip or uuid.
+   *
+   * @param ipaddress
+   * @param uuid
+   * @return DatanodeDiskMetrics
+   * @throws IOException
+   */
+  @Override
+  public HddsProtos.DatanodeDiskMetrics getDatanodeDiskMetrics(String ipaddress,
+                                                         String uuid)
+      throws IOException {
+
+    // check admin authorisation
+    String remoteUser = getRpcRemoteUsername();
+    try {
+      getScm().checkAdminAccess(remoteUser);
+    } catch (IOException e) {
+      LOG.error("Authorisation failed", e);
+      throw e;
+    }
+
+    // get datanode by ip or uuid
+    DatanodeDetails node = null;
+    if (!Strings.isNullOrEmpty(uuid)) {
+      node = scm.getScmNodeManager().getNodeByUuid(uuid);
+    } else if (!Strings.isNullOrEmpty(ipaddress)) {
+      List<DatanodeDetails> nodes = scm.getScmNodeManager()
+          .getNodesByAddress(ipaddress);
+      // currently only the first datanode in the list is being queried
+      node = nodes.get(0);

Review comment:
       Sure, I'm adding support for this.
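
       As a sketch of what that support looks like (this mirrors the reworked diff later in this thread, where the method returns a list of usage infos):

           // collect every datanode registered at this address, not just the first
           List<DatanodeDetails> nodes =
               scm.getScmNodeManager().getNodesByAddress(ipaddress);

           // build usage info for each matching node
           List<HddsProtos.DatanodeUsageInfo> infoList = new ArrayList<>();
           for (DatanodeDetails node : nodes) {
             infoList.add(getUsageInfoFromDatanodeDetails(node));
           }
           return infoList;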






[GitHub] [ozone] siddhantsangwan commented on a change in pull request #1919: HDDS-4816. Add DiskMetricsSubCommand to get Datanode usage information.

Posted by GitBox <gi...@apache.org>.
siddhantsangwan commented on a change in pull request #1919:
URL: https://github.com/apache/ozone/pull/1919#discussion_r580804240



##########
File path: hadoop-hdds/interface-admin/src/main/proto/ScmAdminProtocol.proto
##########
@@ -250,6 +253,18 @@ message NodeQueryResponseProto {
   repeated Node datanodes = 1;
 }
 
+/*
+  Request for disk info of datanode with the specified ipaddress or uuid.
+*/
+message DatanodeDiskMetricsRequestProto {
+  optional string ipaddress = 1;
+  optional string uuid = 2;
+}
+
+message DatanodeDiskMetricsResponseProto {
+  optional DatanodeDiskMetrics metrics = 1;

Review comment:
       Yes, I'll be changing this.
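
       As a sketch of the repeated form (message and field names here are illustrative, following the later rename to DatanodeUsageInfo):

           message DatanodeUsageInfoResponseProto {
             // one entry per matching datanode
             repeated DatanodeUsageInfo info = 1;
           }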






[GitHub] [ozone] lokeshj1703 commented on a change in pull request #1919: HDDS-4816. Add DiskMetricsSubCommand to get Datanode usage information.

Posted by GitBox <gi...@apache.org>.
lokeshj1703 commented on a change in pull request #1919:
URL: https://github.com/apache/ozone/pull/1919#discussion_r581837324



##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
##########
@@ -618,6 +620,73 @@ public boolean getReplicationManagerStatus() {
     return scm.getReplicationManager().isRunning();
   }
 
+  /**
+   * Get Datanode usage info (such as capacity, used) by ip or uuid.
+   *
+   * @param ipaddress - Datanode Address String
+   * @param uuid - Datanode UUID String
+   * @return List of DatanodeUsageInfo. Each element contains usage info such
+   * as capacity, SCMUsed, and remaining space.
+   * @throws IOException
+   */
+  @Override
+  public List<HddsProtos.DatanodeUsageInfo> getDatanodeUsageInfo(
+      String ipaddress, String uuid) throws IOException {
+
+    // check admin authorisation
+    String remoteUser = getRpcRemoteUsername();
+    try {
+      getScm().checkAdminAccess(remoteUser);
+    } catch (IOException e) {
+      LOG.error("Authorisation failed", e);
+      throw e;
+    }
+
+    // get datanodes by ip or uuid
+    List<DatanodeDetails> nodes = new ArrayList<>();
+    if (!Strings.isNullOrEmpty(uuid)) {
+      nodes.add(scm.getScmNodeManager().getNodeByUuid(uuid));
+    } else if (!Strings.isNullOrEmpty(ipaddress)) {
+      nodes = scm.getScmNodeManager().getNodesByAddress(ipaddress);
+    } else {
+      throw new IOException(
+          "Could not get datanode with the specified parameters."
+      );
+    }
+
+    // get datanode usage info
+    List<HddsProtos.DatanodeUsageInfo> infoList = new ArrayList<>();
+    for (DatanodeDetails node : nodes) {
+      infoList.add(getUsageInfoFromDatanodeDetails(node));
+    }
+
+    return infoList;
+  }
+
+  /**
+   * Get usage details for a specific DatanodeDetails node.
+   *
+   * @param node - DatanodeDetails
+   * @return Usage info such as capacity, SCMUsed, and remaining space.
+   * @throws IOException
+   */
+  public HddsProtos.DatanodeUsageInfo getUsageInfoFromDatanodeDetails(

Review comment:
       We can make it private.






[GitHub] [ozone] siddhantsangwan commented on a change in pull request #1919: HDDS-4816. Add DiskMetricsSubCommand to get Datanode usage information.

Posted by GitBox <gi...@apache.org>.
siddhantsangwan commented on a change in pull request #1919:
URL: https://github.com/apache/ozone/pull/1919#discussion_r580804033



##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
##########
@@ -618,6 +607,59 @@ public boolean getReplicationManagerStatus() {
     return scm.getReplicationManager().isRunning();
   }
 
+  /**
+   * Get Datanode disk metrics (such as capacity, used) by ip or uuid.
+   *
+   * @param ipaddress
+   * @param uuid
+   * @return DatanodeDiskMetrics
+   * @throws IOException
+   */
+  @Override
+  public HddsProtos.DatanodeDiskMetrics getDatanodeDiskMetrics(String ipaddress,
+                                                         String uuid)
+      throws IOException {
+
+    // check admin authorisation
+    String remoteUser = getRpcRemoteUsername();
+    try {
+      getScm().checkAdminAccess(remoteUser);
+    } catch (IOException e) {
+      LOG.error("Authorisation failed", e);
+      throw e;
+    }
+
+    // get datanode by ip or uuid
+    DatanodeDetails node = null;
+    if (!Strings.isNullOrEmpty(uuid)) {
+      node = scm.getScmNodeManager().getNodeByUuid(uuid);
+    } else if (!Strings.isNullOrEmpty(ipaddress)) {
+      List<DatanodeDetails> nodes = scm.getScmNodeManager()
+          .getNodesByAddress(ipaddress);

Review comment:
       Keeping ipaddress and UUID mutually exclusive, as per our discussion.






[GitHub] [ozone] siddhantsangwan commented on a change in pull request #1919: HDDS-4816. Add UsageInfoSubcommand to get Datanode usage information.

Posted by GitBox <gi...@apache.org>.
siddhantsangwan commented on a change in pull request #1919:
URL: https://github.com/apache/ozone/pull/1919#discussion_r582546009



##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
##########
@@ -618,6 +620,73 @@ public boolean getReplicationManagerStatus() {
     return scm.getReplicationManager().isRunning();
   }
 
+  /**
+   * Get Datanode usage info (such as capacity, used) by ip or uuid.
+   *
+   * @param ipaddress - Datanode Address String
+   * @param uuid - Datanode UUID String
+   * @return List of DatanodeUsageInfo. Each element contains usage info such
+   * as capacity, SCMUsed, and remaining space.
+   * @throws IOException
+   */
+  @Override
+  public List<HddsProtos.DatanodeUsageInfo> getDatanodeUsageInfo(
+      String ipaddress, String uuid) throws IOException {
+
+    // check admin authorisation
+    String remoteUser = getRpcRemoteUsername();
+    try {
+      getScm().checkAdminAccess(remoteUser);
+    } catch (IOException e) {
+      LOG.error("Authorisation failed", e);
+      throw e;
+    }
+
+    // get datanodes by ip or uuid
+    List<DatanodeDetails> nodes = new ArrayList<>();
+    if (!Strings.isNullOrEmpty(uuid)) {
+      nodes.add(scm.getScmNodeManager().getNodeByUuid(uuid));
+    } else if (!Strings.isNullOrEmpty(ipaddress)) {
+      nodes = scm.getScmNodeManager().getNodesByAddress(ipaddress);
+    } else {
+      throw new IOException(
+          "Could not get datanode with the specified parameters."
+      );
+    }
+
+    // get datanode usage info
+    List<HddsProtos.DatanodeUsageInfo> infoList = new ArrayList<>();
+    for (DatanodeDetails node : nodes) {
+      infoList.add(getUsageInfoFromDatanodeDetails(node));
+    }
+
+    return infoList;
+  }
+
+  /**
+   * Get usage details for a specific DatanodeDetails node.
+   *
+   * @param node - DatanodeDetails
+   * @return Usage info such as capacity, SCMUsed, and remaining space.
+   * @throws IOException
+   */
+  public HddsProtos.DatanodeUsageInfo getUsageInfoFromDatanodeDetails(

Review comment:
       Done.






[GitHub] [ozone] siddhantsangwan commented on a change in pull request #1919: HDDS-4816. Add DiskMetricsSubCommand to get Datanode usage information.

Posted by GitBox <gi...@apache.org>.
siddhantsangwan commented on a change in pull request #1919:
URL: https://github.com/apache/ozone/pull/1919#discussion_r580961898



##########
File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/freon/TestOzoneClientKeyGenerator.java
##########
@@ -16,21 +16,20 @@
  */
 package org.apache.hadoop.ozone.freon;
 
-import java.io.File;
-import java.io.FileOutputStream;
-import java.io.IOException;
-
+import org.apache.commons.io.FileUtils;

Review comment:
       Reverting this file to its previous version.






[GitHub] [ozone] lokeshj1703 commented on a change in pull request #1919: HDDS-4816. Add DiskMetricsSubCommand to get Datanode usage information.

Posted by GitBox <gi...@apache.org>.
lokeshj1703 commented on a change in pull request #1919:
URL: https://github.com/apache/ozone/pull/1919#discussion_r580018330



##########
File path: hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolClientSideTranslatorPB.java
##########
@@ -16,56 +16,16 @@
  */
 package org.apache.hadoop.hdds.scm.protocolPB;
 
-import java.io.Closeable;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.function.Consumer;
-
+import com.google.common.base.Preconditions;
+import com.google.protobuf.RpcController;
+import com.google.protobuf.ServiceException;
 import org.apache.commons.lang3.tuple.Pair;
 import org.apache.hadoop.hdds.annotation.InterfaceAudience;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.GetScmInfoResponseProto;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.SafeModeRuleStatusProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.GetSafeModeRuleStatusesResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.GetSafeModeRuleStatusesRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ActivatePipelineRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ClosePipelineRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ContainerRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ContainerResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.DeactivatePipelineRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ForceExitSafeModeRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ForceExitSafeModeResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.GetContainerRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.GetContainerWithPipelineRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.GetContainerWithPipelineBatchRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.GetPipelineRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.GetPipelineResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.InSafeModeRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ListPipelineRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ListPipelineResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.NodeQueryRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.NodeQueryResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.SCMCloseContainerRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.PipelineRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.PipelineResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ReplicationManagerStatusRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ReplicationManagerStatusResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.SCMDeleteContainerRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.SCMListContainerRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.SCMListContainerResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ScmContainerLocationRequest;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.*;

Review comment:
       Star import.

##########
File path: hadoop-hdds/interface-admin/src/main/proto/ScmAdminProtocol.proto
##########
@@ -250,6 +253,18 @@ message NodeQueryResponseProto {
   repeated Node datanodes = 1;
 }
 
+/*
+  Request for disk info of datanode with the specified ipaddress or uuid.
+*/
+message DatanodeDiskMetricsRequestProto {
+  optional string ipaddress = 1;
+  optional string uuid = 2;
+}
+
+message DatanodeDiskMetricsResponseProto {
+  optional DatanodeDiskMetrics metrics = 1;

Review comment:
       We can make this repeated since we will be listing usage for multiple nodes.

##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
##########
@@ -70,24 +62,21 @@
 import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.ipc.Server;
-import org.apache.hadoop.ozone.audit.AuditAction;
-import org.apache.hadoop.ozone.audit.AuditEventStatus;
-import org.apache.hadoop.ozone.audit.AuditLogger;
-import org.apache.hadoop.ozone.audit.AuditLoggerType;
-import org.apache.hadoop.ozone.audit.AuditMessage;
-import org.apache.hadoop.ozone.audit.Auditor;
-import org.apache.hadoop.ozone.audit.SCMAction;
+import org.apache.hadoop.ozone.audit.*;
+import org.apache.ratis.thirdparty.com.google.common.base.Strings;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.*;
+import java.util.stream.Collectors;
 
-import com.google.protobuf.ProtocolMessageEnum;
 import static org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.StorageContainerLocationProtocolService.newReflectiveBlockingService;
-import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_CLIENT_ADDRESS_KEY;
-import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_HANDLER_COUNT_DEFAULT;
-import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_HANDLER_COUNT_KEY;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys.*;

Review comment:
       Star import.

##########
File path: hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/DiskMetricsSubCommand.java
##########
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.cli.datanode;
+
+import com.google.common.base.Strings;
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.cli.ScmSubcommand;
+import org.apache.hadoop.hdds.scm.client.ScmClient;
+import picocli.CommandLine;
+import picocli.CommandLine.Command;
+
+import java.io.IOException;
+import java.text.NumberFormat;
+
+/**
+ * Command to list the disk metrics of a datanode.
+ */
+@Command(
+    name = "disk-metrics",
+    description = "List disk metrics " +
+        "(such as Capacity, SCMUsed, Remaining) of a datanode by IP address " +
+        "or UUID",
+    mixinStandardHelpOptions = true,
+    versionProvider = HddsVersionProvider.class)
+public class DiskMetricsSubCommand extends ScmSubcommand {
+
+  @CommandLine.Option(names = {"--ip"}, paramLabel = "IP", description =
+      "Show info by datanode ip address")
+  private String ipaddress;
+
+  @CommandLine.Option(names = {"--uuid"}, paramLabel = "UUID", description =
+      "Show info by datanode UUID")
+  private String uuid;
+
+  public String getIpaddress() {
+    return ipaddress;
+  }
+
+  public void setIpaddress(String ipaddress) {
+    this.ipaddress = ipaddress;
+  }
+
+  public String getUuid() {
+    return uuid;
+  }
+
+  public void setUuid(String uuid) {
+    this.uuid = uuid;
+  }
+
+  @Override
+  public void execute(ScmClient scmClient) throws IOException {
+    if (Strings.isNullOrEmpty(ipaddress)) {
+      ipaddress = "";
+    }
+    if (Strings.isNullOrEmpty(uuid)) {
+      uuid = "";
+    }
+    if (Strings.isNullOrEmpty(ipaddress) && Strings.isNullOrEmpty(uuid)) {
+      throw new IOException("ipaddress or uuid of the datanode must be " +
+          "specified.");
+    }
+
+    HddsProtos.DatanodeDiskMetrics metrics =
+        scmClient.getDatanodeDiskMetrics(ipaddress, uuid);
+    Double capacity = Double.parseDouble(metrics.getCapacity());

Review comment:
       We will not need this if we change the data type in the proto.
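
       For illustration, once the fields are int64 the generated getters return long and the parsing step disappears (a sketch; getter names follow the setters used elsewhere in this patch):

           // no Double.parseDouble needed with int64 proto fields
           long capacity = metrics.getCapacity();
           long used = metrics.getUsed();
           long remaining = metrics.getRemaining();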

##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
##########
@@ -70,24 +62,21 @@
 import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.ipc.Server;
-import org.apache.hadoop.ozone.audit.AuditAction;
-import org.apache.hadoop.ozone.audit.AuditEventStatus;
-import org.apache.hadoop.ozone.audit.AuditLogger;
-import org.apache.hadoop.ozone.audit.AuditLoggerType;
-import org.apache.hadoop.ozone.audit.AuditMessage;
-import org.apache.hadoop.ozone.audit.Auditor;
-import org.apache.hadoop.ozone.audit.SCMAction;
+import org.apache.hadoop.ozone.audit.*;

Review comment:
       Star import.

##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocolServerSideTranslatorPB.java
##########
@@ -17,78 +17,33 @@
  */
 package org.apache.hadoop.hdds.scm.protocol;
 
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-
+import com.google.common.base.Strings;
+import com.google.protobuf.ProtocolMessageEnum;
+import com.google.protobuf.RpcController;
+import com.google.protobuf.ServiceException;
 import org.apache.commons.lang3.tuple.Pair;
 import org.apache.hadoop.hdds.annotation.InterfaceAudience;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ActivatePipelineRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ActivatePipelineResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ClosePipelineRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ClosePipelineResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ContainerRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ContainerResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.DeactivatePipelineRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.DeactivatePipelineResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ForceExitSafeModeRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ForceExitSafeModeResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.GetContainerRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.GetContainerResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.GetContainerWithPipelineBatchRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.GetContainerWithPipelineBatchResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.GetContainerWithPipelineRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.GetContainerWithPipelineResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.GetPipelineRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.GetPipelineResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.InSafeModeRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.InSafeModeResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ListPipelineRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ListPipelineResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.NodeQueryResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.PipelineResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ReplicationManagerStatusRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ReplicationManagerStatusResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.SCMCloseContainerRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.SCMCloseContainerResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.SCMDeleteContainerRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.SCMDeleteContainerResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.SCMListContainerRequestProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.SCMListContainerResponseProto;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ScmContainerLocationRequest;
-import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ScmContainerLocationResponse;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.*;

Review comment:
       Star import.

##########
File path: hadoop-hdds/interface-client/src/main/proto/hdds.proto
##########
@@ -156,6 +156,12 @@ message NodePool {
     repeated Node nodes = 1;
 }
 
+message DatanodeDiskMetrics {

Review comment:
       We can use the int64 type here.
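
       That is, roughly (field numbers illustrative):

           message DatanodeDiskMetrics {
             optional int64 capacity = 1;
             optional int64 used = 2;
             optional int64 remaining = 3;
           }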

##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
##########
@@ -618,6 +607,59 @@ public boolean getReplicationManagerStatus() {
     return scm.getReplicationManager().isRunning();
   }
 
+  /**
+   * Get Datanode disk metrics (such as capacity, used) by ip or uuid.
+   *
+   * @param ipaddress
+   * @param uuid
+   * @return DatanodeDiskMetrics
+   * @throws IOException
+   */
+  @Override
+  public HddsProtos.DatanodeDiskMetrics getDatanodeDiskMetrics(String ipaddress,
+                                                         String uuid)
+      throws IOException {
+
+    // check admin authorisation
+    String remoteUser = getRpcRemoteUsername();
+    try {
+      getScm().checkAdminAccess(remoteUser);
+    } catch (IOException e) {
+      LOG.error("Authorisation failed", e);
+      throw e;
+    }
+
+    // get datanode by ip or uuid
+    DatanodeDetails node = null;
+    if (!Strings.isNullOrEmpty(uuid)) {
+      node = scm.getScmNodeManager().getNodeByUuid(uuid);
+    } else if (!Strings.isNullOrEmpty(ipaddress)) {
+      List<DatanodeDetails> nodes = scm.getScmNodeManager()
+          .getNodesByAddress(ipaddress);
+      // currently only the first datanode in the list is being queried
+      node = nodes.get(0);

Review comment:
       Let's send info for all the nodes retrieved here.

##########
File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/freon/TestOzoneClientKeyGenerator.java
##########
@@ -16,21 +16,20 @@
  */
 package org.apache.hadoop.ozone.freon;
 
-import java.io.File;
-import java.io.FileOutputStream;
-import java.io.IOException;
-
+import org.apache.commons.io.FileUtils;

Review comment:
       Redundant change to this file.

##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
##########
@@ -618,6 +607,59 @@ public boolean getReplicationManagerStatus() {
     return scm.getReplicationManager().isRunning();
   }
 
+  /**
+   * Get Datanode disk metrics (such as capacity, used) by ip or uuid.
+   *
+   * @param ipaddress
+   * @param uuid
+   * @return DatanodeDiskMetrics
+   * @throws IOException
+   */
+  @Override
+  public HddsProtos.DatanodeDiskMetrics getDatanodeDiskMetrics(String ipaddress,
+                                                         String uuid)
+      throws IOException {
+
+    // check admin authorisation
+    String remoteUser = getRpcRemoteUsername();
+    try {
+      getScm().checkAdminAccess(remoteUser);
+    } catch (IOException e) {
+      LOG.error("Authorisation failed", e);
+      throw e;
+    }
+
+    // get datanode by ip or uuid
+    DatanodeDetails node = null;
+    if (!Strings.isNullOrEmpty(uuid)) {
+      node = scm.getScmNodeManager().getNodeByUuid(uuid);
+    } else if (!Strings.isNullOrEmpty(ipaddress)) {
+      List<DatanodeDetails> nodes = scm.getScmNodeManager()
+          .getNodesByAddress(ipaddress);

Review comment:
       I think we should also handle the case where both are specified and both correspond to different datanodes.

##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
##########
@@ -70,24 +62,21 @@
 import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.ipc.Server;
-import org.apache.hadoop.ozone.audit.AuditAction;
-import org.apache.hadoop.ozone.audit.AuditEventStatus;
-import org.apache.hadoop.ozone.audit.AuditLogger;
-import org.apache.hadoop.ozone.audit.AuditLoggerType;
-import org.apache.hadoop.ozone.audit.AuditMessage;
-import org.apache.hadoop.ozone.audit.Auditor;
-import org.apache.hadoop.ozone.audit.SCMAction;
+import org.apache.hadoop.ozone.audit.*;
+import org.apache.ratis.thirdparty.com.google.common.base.Strings;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.*;

Review comment:
       Star import.

##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
##########
@@ -618,6 +607,59 @@ public boolean getReplicationManagerStatus() {
     return scm.getReplicationManager().isRunning();
   }
 
+  /**
+   * Get Datanode disk metrics (such as capacity, used) by ip or uuid.
+   *
+   * @param ipaddress
+   * @param uuid
+   * @return DatanodeDiskMetrics
+   * @throws IOException
+   */
+  @Override
+  public HddsProtos.DatanodeDiskMetrics getDatanodeDiskMetrics(String ipaddress,
+                                                         String uuid)
+      throws IOException {
+
+    // check admin authorisation
+    String remoteUser = getRpcRemoteUsername();
+    try {
+      getScm().checkAdminAccess(remoteUser);
+    } catch (IOException e) {
+      LOG.error("Authorisation failed", e);
+      throw e;
+    }
+
+    // get datanode by ip or uuid
+    DatanodeDetails node = null;
+    if (!Strings.isNullOrEmpty(uuid)) {
+      node = scm.getScmNodeManager().getNodeByUuid(uuid);
+    } else if (!Strings.isNullOrEmpty(ipaddress)) {
+      List<DatanodeDetails> nodes = scm.getScmNodeManager()
+          .getNodesByAddress(ipaddress);
+      // currently only the first datanode in the list is being queried
+      node = nodes.get(0);
+    } else {
+      throw new IOException(
+          "Could not get datanode with the specified parameters."
+      );
+    }
+
+    // get metrics of the datanode
+    SCMNodeStat stat = scm.getScmNodeManager().getNodeStat(node).get();
+    String capacity = stat.getCapacity().get().toString();
+    String used = stat.getScmUsed().get().toString();
+    String remaining = stat.getRemaining().get().toString();
+
+    HddsProtos.DatanodeDiskMetrics metrics = HddsProtos.DatanodeDiskMetrics
+        .newBuilder()
+        .setCapacity(capacity)
+        .setUsed(used)
+        .setRemaining(remaining)
+        .build();

Review comment:
       We can move this to a separate function.
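
       For example, a sketch of the extracted helper (shown with the later DatanodeUsageInfo naming; the exact builder setters are assumptions based on the capacity/SCMUsed/remaining fields above):

           private HddsProtos.DatanodeUsageInfo getUsageInfoFromDatanodeDetails(
               DatanodeDetails node) {
             // look up the node's storage stats from the SCM node manager
             SCMNodeStat stat = scm.getScmNodeManager().getNodeStat(node).get();
             return HddsProtos.DatanodeUsageInfo.newBuilder()
                 .setCapacity(stat.getCapacity().get())
                 .setScmUsed(stat.getScmUsed().get())
                 .setRemaining(stat.getRemaining().get())
                 .build();
           }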






[GitHub] [ozone] lokeshj1703 commented on pull request #1919: HDDS-4816. Add UsageInfoSubcommand to get Datanode usage information.

Posted by GitBox <gi...@apache.org>.
lokeshj1703 commented on pull request #1919:
URL: https://github.com/apache/ozone/pull/1919#issuecomment-785765112


   @siddhantsangwan Thanks for the contribution! I have committed the PR to the master branch.




[GitHub] [ozone] siddhantsangwan commented on a change in pull request #1919: HDDS-4816. Add DiskMetricsSubCommand to get Datanode usage information.

Posted by GitBox <gi...@apache.org>.
siddhantsangwan commented on a change in pull request #1919:
URL: https://github.com/apache/ozone/pull/1919#discussion_r580804521



##########
File path: hadoop-hdds/interface-client/src/main/proto/hdds.proto
##########
@@ -156,6 +156,12 @@ message NodePool {
     repeated Node nodes = 1;
 }
 
+message DatanodeDiskMetrics {

Review comment:
       Makes sense, thanks for the suggestion.






[GitHub] [ozone] lokeshj1703 commented on a change in pull request #1919: HDDS-4816. Add DiskMetricsSubCommand to get Datanode usage information.

Posted by GitBox <gi...@apache.org>.
lokeshj1703 commented on a change in pull request #1919:
URL: https://github.com/apache/ozone/pull/1919#discussion_r580042524



##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
##########
@@ -618,6 +607,59 @@ public boolean getReplicationManagerStatus() {
     return scm.getReplicationManager().isRunning();
   }
 
+  /**
+   * Get Datanode disk metrics (such as capacity, used) by ip or uuid.
+   *
+   * @param ipaddress
+   * @param uuid
+   * @return DatanodeDiskMetrics
+   * @throws IOException
+   */
+  @Override
+  public HddsProtos.DatanodeDiskMetrics getDatanodeDiskMetrics(String ipaddress,
+                                                         String uuid)
+      throws IOException {
+
+    // check admin authorisation
+    String remoteUser = getRpcRemoteUsername();
+    try {
+      getScm().checkAdminAccess(remoteUser);
+    } catch (IOException e) {
+      LOG.error("Authorisation failed", e);
+      throw e;
+    }
+
+    // get datanode by ip or uuid
+    DatanodeDetails node = null;
+    if (!Strings.isNullOrEmpty(uuid)) {
+      node = scm.getScmNodeManager().getNodeByUuid(uuid);
+    } else if (!Strings.isNullOrEmpty(ipaddress)) {
+      List<DatanodeDetails> nodes = scm.getScmNodeManager()
+          .getNodesByAddress(ipaddress);

Review comment:
       I guess we can send the usage info for both in this case.






[GitHub] [ozone] lokeshj1703 closed pull request #1919: HDDS-4816. Add UsageInfoSubcommand to get Datanode usage information.

Posted by GitBox <gi...@apache.org>.
lokeshj1703 closed pull request #1919:
URL: https://github.com/apache/ozone/pull/1919


   

