Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2022/03/18 18:12:18 UTC

[GitHub] [hadoop] ayushtkn commented on a change in pull request #4081: HDFS-13248: Namenode needs to use the actual client IP when going through RBF proxy.

ayushtkn commented on a change in pull request #4081:
URL: https://github.com/apache/hadoop/pull/4081#discussion_r830233015



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
##########
@@ -1899,7 +1907,27 @@ private void verifySoftwareVersion(DatanodeRegistration dnReg)
     }
   }
 
-  private static String getClientMachine() {
+  private String getClientMachine() {
+    if (ipProxyUsers != null) {
+      // Get the real user (or effective if it isn't a proxy user)
+      UserGroupInformation user = Server.getRemoteUser().getRealUserOrSelf();
+      if (ArrayUtils.contains(ipProxyUsers, user.getShortUserName())) {
+        CallerContext context = CallerContext.getCurrent();
+        if (context != null && context.isContextValid()) {
+          String cc = context.getContext();
+          // if the rpc has a caller context of "clientIp:1.2.3.4,CLI",
+          // return "1.2.3.4" as the client machine.
+          String key = CallerContext.CLIENT_IP_STR +
+              CallerContext.Builder.KEY_VALUE_SEPARATOR;
+          int posn = cc.indexOf(key);
+          if (posn != -1) {
+            posn += key.length();
+            int end = cc.indexOf(",", posn);
+            return end == -1 ? cc.substring(posn) : cc.substring(posn, end);

Review comment:
       If someone passes something like ``ABclientIp:1.2.3.4,CLI``, I guess this logic will still match that entry, right?
   
   Moreover, why don't we compute the key and its length once, where we read ``ipProxyUsers``, rather than redoing this concatenation and length computation on every call?
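   
   For illustration, a stricter and cheaper variant could look like the sketch below. It precomputes the key once and only accepts it at the start of the context or directly after the "," field separator. The field names come from the diff above; where the cached key lives (e.g. next to ``ipProxyUsers``) is my assumption:
   
       // Hypothetical: computed once, e.g. where ipProxyUsers is read.
       private static final String CLIENT_IP_KEY =
           CallerContext.CLIENT_IP_STR +
               CallerContext.Builder.KEY_VALUE_SEPARATOR;
       private static final int CLIENT_IP_KEY_LENGTH = CLIENT_IP_KEY.length();
   
       // Reject prefix matches such as "ABclientIp:1.2.3.4,CLI" by
       // requiring the key at position 0 or right after a separator.
       int posn = cc.indexOf(CLIENT_IP_KEY);
       if (posn != -1 && (posn == 0 || cc.charAt(posn - 1) == ',')) {
         posn += CLIENT_IP_KEY_LENGTH;
         int end = cc.indexOf(',', posn);
         return end == -1 ? cc.substring(posn) : cc.substring(posn, end);
       }
   
   A fully robust version would loop over the ``indexOf`` hits, since a rejected prefix match earlier in the string could still shadow a genuine ``clientIp:`` entry later on.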

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRpcServer.java
##########
@@ -59,5 +74,67 @@ public void testNamenodeRpcBindAny() throws IOException {
       conf.unset(DFS_NAMENODE_RPC_BIND_HOST_KEY);
     }
   }
+
+  /**
+   * A test to make sure that if an authorized user adds "clientIp:" to their
+   * caller context, it will be used to make locality decisions on the NN.
+   */
+  @Test
+  public void testNamenodeRpcClientIpProxy()
+      throws InterruptedException, IOException {
+    Configuration conf = new HdfsConfiguration();
+
+    conf.set(DFS_NAMENODE_IP_PROXY_USERS, "fake_joe");

Review comment:
       We should also add a test where this value isn't set but someone still passes the client details as part of the CallerContext; in that case it shouldn't be honoured.
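   
   Something along these lines, reusing the cluster, file and user setup from the test below; a sketch only, with ``DFS_NAMENODE_IP_PROXY_USERS`` deliberately left unset:
   
       // The spoofed clientIp entry should be ignored because fake_joe
       // is not listed in DFS_NAMENODE_IP_PROXY_USERS.
       CallerContext.setCurrent(
           new CallerContext.Builder("test", conf)
               .append(CallerContext.CLIENT_IP_STR, hosts[1])
               .build());
       UserGroupInformation joe = UserGroupInformation
           .createUserForTesting("fake_joe", new String[]{"fake_group"});
       FileSystem joeFs = DFSTestUtil.getFileSystemAs(joe, conf);
       BlockLocation[] locs = joeFs.getFileBlockLocations(fooName, 0, 1);
       // Assert that the NN fell back to the real remote address, i.e.
       // the ordering of locs does not follow the spoofed hosts[1].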

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRpcServer.java
##########
@@ -59,5 +74,67 @@ public void testNamenodeRpcBindAny() throws IOException {
       conf.unset(DFS_NAMENODE_RPC_BIND_HOST_KEY);
     }
   }
+
+  /**
+   * A test to make sure that if an authorized user adds "clientIp:" to their
+   * caller context, it will be used to make locality decisions on the NN.
+   */
+  @Test
+  public void testNamenodeRpcClientIpProxy()
+      throws InterruptedException, IOException {
+    Configuration conf = new HdfsConfiguration();
+
+    conf.set(DFS_NAMENODE_IP_PROXY_USERS, "fake_joe");
+    // Make 3 nodes & racks so that we have a decent shot of detecting when
+    // our change overrides the random choice of datanode.
+    final String[] racks = new String[]{"/rack1", "/rack2", "/rack3"};
+    final String[] hosts = new String[]{"node1", "node2", "node3"};
+    MiniDFSCluster cluster = null;
+    final CallerContext original = CallerContext.getCurrent();
+
+    try {
+      cluster = new MiniDFSCluster.Builder(conf)
+          .racks(racks).hosts(hosts).numDataNodes(hosts.length)
+          .build();
+      cluster.waitActive();
+      DistributedFileSystem fs = cluster.getFileSystem();
+      // Write a sample file
+      final Path fooName = fs.makeQualified(new Path("/foo"));

Review comment:
       Can you help me understand why you are making the path qualified here? The test passes without it too, and I suppose we have only one FS, the default one.
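   
   That is, presumably the plain path would do:
   
       final Path fooName = new Path("/foo");
       FSDataOutputStream stream = fs.create(fooName);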

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRpcServer.java
##########
@@ -59,5 +69,67 @@ public void testNamenodeRpcBindAny() throws IOException {
       conf.unset(DFS_NAMENODE_RPC_BIND_HOST_KEY);
     }
   }
+
+  /**
+   * A test to make sure that if an authorized user adds "clientIp:" to their
+   * caller context, it will be used to make locality decisions on the NN.
+   */
+  @Test
+  public void testNamenodeRpcClientIpProxy()
+      throws InterruptedException, IOException {
+    Configuration conf = new HdfsConfiguration();
+
+    conf.set(DFS_NAMENODE_IP_PROXY_USERS, "fake_joe");
+    // Make 3 nodes & racks so that we have a decent shot of detecting when
+    // our change overrides the random choice of datanode.
+    final String[] racks = new String[]{"/rack1", "/rack2", "/rack3"};
+    final String[] hosts = new String[]{"node1", "node2", "node3"};
+    MiniDFSCluster cluster = null;
+    final CallerContext original = CallerContext.getCurrent();
+
+    try {
+      cluster = new MiniDFSCluster.Builder(conf)
+          .racks(racks).hosts(hosts).numDataNodes(hosts.length)
+          .build();
+      cluster.waitActive();
+      DistributedFileSystem fs = cluster.getFileSystem();
+      // Write a sample file
+      final Path fooName = fs.makeQualified(new Path("/foo"));
+      FSDataOutputStream stream = fs.create(fooName);
+      stream.write("Hello world!\n".getBytes(StandardCharsets.UTF_8));
+      stream.close();
+      // Set the caller context to set the ip address
+      CallerContext.setCurrent(
+          new CallerContext.Builder("test", conf)
+              .append(CallerContext.CLIENT_IP_STR, hosts[0])
+              .build());
+      // Run as fake joe to authorize the test
+      UserGroupInformation
+          .createUserForTesting("fake_joe", new String[]{"fake_group"})
+          .doAs(new PrivilegedExceptionAction<Object>() {
+            @Override
+            public Object run() throws Exception {
+              // Create a new file system as the joe user
+              DistributedFileSystem joeFs =
+                  (DistributedFileSystem) fooName.getFileSystem(conf);

Review comment:
       You could use ``DFSTestUtil.getFileSystemAs()`` here instead.
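   
   For example, something along these lines (note that ``getFileSystemAs`` returns a plain ``FileSystem``, so the cast is still needed if DFS-specific methods are used):
   
       UserGroupInformation joe = UserGroupInformation
           .createUserForTesting("fake_joe", new String[]{"fake_group"});
       DistributedFileSystem joeFs =
           (DistributedFileSystem) DFSTestUtil.getFileSystemAs(joe, conf);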




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


