Posted to common-issues@hadoop.apache.org by "Greg Senia (JIRA)" <ji...@apache.org> on 2019/05/29 18:35:01 UTC

[jira] [Comment Edited] (HADOOP-14104) Client should always ask namenode for kms provider path.

    [ https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16851162#comment-16851162 ] 

Greg Senia edited comment on HADOOP-14104 at 5/29/19 6:34 PM:
--------------------------------------------------------------

[~daryn] or [~shahrs87] or [~xiaochen] I would be curious whether there was a reason this check was removed. We use distcp, and it no longer works because the client now tries to reach our remote cluster's KMS server, which is network-isolated, even when the distcp does not touch an encryption zone. By removing "if (dfs.isHDFSEncryptionEnabled())" we now have a situation where 1) folks are able to move TDE/encrypted data between local and remote clusters, which is undesirable, and 2) we have to figure out how to open up the KMS server in our remote cluster just to move data that does not reside in an encryption zone.

Code change (old vs. new):

 public Token<?>[] addDelegationTokens(
      final String renewer, Credentials credentials) throws IOException {
    Token<?>[] tokens = super.addDelegationTokens(renewer, credentials);
*    if (dfs.isHDFSEncryptionEnabled()) {
*      KeyProviderDelegationTokenExtension keyProviderDelegationTokenExtension =
          KeyProviderDelegationTokenExtension.
              createKeyProviderDelegationTokenExtension(dfs.getKeyProvider());
      Token<?>[] kpTokens = keyProviderDelegationTokenExtension.
          addDelegationTokens(renewer, credentials);
      if (tokens != null && kpTokens != null) {
        Token<?>[] all = new Token<?>[tokens.length + kpTokens.length];
        System.arraycopy(tokens, 0, all, 0, tokens.length);
        System.arraycopy(kpTokens, 0, all, tokens.length, kpTokens.length);
        tokens = all;
      } else {
        tokens = (tokens != null) ? tokens : kpTokens;
      }
    }
    return tokens;
  }

vs.


  @Override
  public Token<?>[] addDelegationTokens(
      final String renewer, Credentials credentials) throws IOException {
    Token<?>[] tokens = super.addDelegationTokens(renewer, credentials);
    URI keyProviderUri = dfs.getKeyProviderUri();
    if (keyProviderUri != null) {
      KeyProviderDelegationTokenExtension keyProviderDelegationTokenExtension =
          KeyProviderDelegationTokenExtension.
              createKeyProviderDelegationTokenExtension(dfs.getKeyProvider());
      Token<?>[] kpTokens = keyProviderDelegationTokenExtension.
          addDelegationTokens(renewer, credentials);
      credentials.addSecretKey(dfs.getKeyProviderMapKey(),
          DFSUtil.string2Bytes(keyProviderUri.toString()));
      if (tokens != null && kpTokens != null) {
        Token<?>[] all = new Token<?>[tokens.length + kpTokens.length];
        System.arraycopy(tokens, 0, all, 0, tokens.length);
        System.arraycopy(kpTokens, 0, all, tokens.length, kpTokens.length);
        tokens = all;
      } else {
        tokens = (tokens != null) ? tokens : kpTokens;
      }
    }
    return tokens;
  }
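For what it's worth, the token-combining step at the end of both versions is the same: concatenate the HDFS delegation tokens with the KMS tokens, where either array may be null. A minimal standalone sketch of that logic (plain Java, with Hadoop's Token type replaced by String purely for illustration):

```java
// Standalone sketch of the nullable-array merge shared by both versions above.
// String stands in for org.apache.hadoop.security.token.Token<?> here.
public class TokenMerge {

  // Mirrors the tokens/kpTokens combination in addDelegationTokens():
  // if both arrays are present, concatenate them; otherwise return
  // whichever one is non-null (possibly null if both are).
  static String[] merge(String[] tokens, String[] kpTokens) {
    if (tokens != null && kpTokens != null) {
      String[] all = new String[tokens.length + kpTokens.length];
      System.arraycopy(tokens, 0, all, 0, tokens.length);
      System.arraycopy(kpTokens, 0, all, tokens.length, kpTokens.length);
      return all;
    }
    return (tokens != null) ? tokens : kpTokens;
  }

  public static void main(String[] args) {
    String[] merged = merge(new String[] {"hdfs-dt"}, new String[] {"kms-dt"});
    System.out.println(merged.length);        // both present: combined array
    System.out.println(merge(null, new String[] {"kms-dt"})[0]);
  }
}
```

The behavioral difference between the two versions is therefore only in the guard: the old code skipped this merge entirely when the local cluster had no encryption zones, while the new code performs it whenever the namenode reports a key provider URI.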


was (Author: gss2002):
[~daryn] or [~shahrs87] out of curiosity: due to the removal of "if (dfs.isHDFSEncryptionEnabled())" we now have a situation where folks are able to move TDE'd data between a local and remote cluster, which was undesirable. Previously this prevented TDE'd data from being moved between clusters. Now we have to open the remote KMS server's ports, which were previously blocked from the remote cluster. I guess my question is: can we add a parameter to prevent distcp or HDFS from looking at remote clusters?

 public Token<?>[] addDelegationTokens(
      final String renewer, Credentials credentials) throws IOException {
    Token<?>[] tokens = super.addDelegationTokens(renewer, credentials);
    if (dfs.isHDFSEncryptionEnabled()) {
      KeyProviderDelegationTokenExtension keyProviderDelegationTokenExtension =
          KeyProviderDelegationTokenExtension.
              createKeyProviderDelegationTokenExtension(dfs.getKeyProvider());
      Token<?>[] kpTokens = keyProviderDelegationTokenExtension.
          addDelegationTokens(renewer, credentials);
      if (tokens != null && kpTokens != null) {
        Token<?>[] all = new Token<?>[tokens.length + kpTokens.length];
        System.arraycopy(tokens, 0, all, 0, tokens.length);
        System.arraycopy(kpTokens, 0, all, tokens.length, kpTokens.length);
        tokens = all;
      } else {
        tokens = (tokens != null) ? tokens : kpTokens;
      }
    }
    return tokens;
  }

vs.


  @Override
  public Token<?>[] addDelegationTokens(
      final String renewer, Credentials credentials) throws IOException {
    Token<?>[] tokens = super.addDelegationTokens(renewer, credentials);
    URI keyProviderUri = dfs.getKeyProviderUri();
    if (keyProviderUri != null) {
      KeyProviderDelegationTokenExtension keyProviderDelegationTokenExtension =
          KeyProviderDelegationTokenExtension.
              createKeyProviderDelegationTokenExtension(dfs.getKeyProvider());
      Token<?>[] kpTokens = keyProviderDelegationTokenExtension.
          addDelegationTokens(renewer, credentials);
      credentials.addSecretKey(dfs.getKeyProviderMapKey(),
          DFSUtil.string2Bytes(keyProviderUri.toString()));
      if (tokens != null && kpTokens != null) {
        Token<?>[] all = new Token<?>[tokens.length + kpTokens.length];
        System.arraycopy(tokens, 0, all, 0, tokens.length);
        System.arraycopy(kpTokens, 0, all, tokens.length, kpTokens.length);
        tokens = all;
      } else {
        tokens = (tokens != null) ? tokens : kpTokens;
      }
    }
    return tokens;
  }

> Client should always ask namenode for kms provider path.
> --------------------------------------------------------
>
>                 Key: HADOOP-14104
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14104
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: kms
>            Reporter: Rushabh S Shah
>            Assignee: Rushabh S Shah
>            Priority: Major
>             Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
>         Attachments: HADOOP-14104-branch-2.8.patch, HADOOP-14104-branch-2.patch, HADOOP-14104-trunk-v1.patch, HADOOP-14104-trunk-v2.patch, HADOOP-14104-trunk-v3.patch, HADOOP-14104-trunk-v4.patch, HADOOP-14104-trunk-v5.patch, HADOOP-14104-trunk.patch
>
>
> According to current implementation of kms provider in client conf, there can only be one kms.
> In multi-cluster environment, if a client is reading encrypted data from multiple clusters it will only get kms token for local cluster.
> Not sure whether the target version is correct or not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org