Posted to hdfs-issues@hadoop.apache.org by "Yiqun Lin (Jira)" <ji...@apache.org> on 2023/02/01 14:20:00 UTC

[jira] [Comment Edited] (HDFS-16644) java.io.IOException Invalid token in javax.security.sasl.qop

    [ https://issues.apache.org/jira/browse/HDFS-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683033#comment-17683033 ] 

Yiqun Lin edited comment on HDFS-16644 at 2/1/23 2:19 PM:
----------------------------------------------------------

We also hit this issue in our Hadoop 3 cluster. Our DataNode servers run Hadoop 3.3, and the client version is 2.10.2.

We found that there is a chance that an abnormal QOP value (e.g. "DI") can be passed in and overwrite the DataNode SASL props.
In the default case (HDFS-13541 feature not enabled), the secret should not be sent here, but there may be a bug in the 2.10.2 client that still sends it.

[~vagarychen], could you please check this code on branch-2.10? It is very dangerous: once the DN SASL props are overwritten with an invalid value, all data reads and writes can be affected. Also, no validation check is done on the QOP value here.

SaslDataTransferServer#doSaslHandshake
{noformat}
  private IOStreamPair doSaslHandshake(Peer peer, OutputStream underlyingOut,
      InputStream underlyingIn, Map<String, String> saslProps,
      CallbackHandler callbackHandler) throws IOException {

    DataInputStream in = new DataInputStream(underlyingIn);
    DataOutputStream out = new DataOutputStream(underlyingOut);

    int magicNumber = in.readInt();
    if (magicNumber != SASL_TRANSFER_MAGIC_NUMBER) {
      throw new InvalidMagicNumberException(magicNumber, 
          dnConf.getEncryptDataTransfer());
    }
    try {
      // step 1
      SaslMessageWithHandshake message = readSaslMessageWithHandshakeSecret(in);
      byte[] secret = message.getSecret();
      String bpid = message.getBpid();
      if (secret != null || bpid != null) {
        // sanity check, if one is null, the other must also not be null
        assert(secret != null && bpid != null);
        String qop = new String(secret, Charsets.UTF_8);
        saslProps.put(Sasl.QOP, qop);   <===== any QOP value can be set here, unvalidated
      }
...
{noformat}
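
For illustration, here is a minimal sketch of the kind of guard being suggested, assuming the only tokens valid for javax.security.sasl.Sasl.QOP are "auth", "auth-int" and "auth-conf". The class and method names are hypothetical; this is not the actual HDFS patch, just one way the check could look:
{noformat}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

final class QopValidator {
  // The only tokens defined for javax.security.sasl.Sasl.QOP.
  private static final Set<String> VALID_QOPS =
      new HashSet<>(Arrays.asList("auth", "auth-int", "auth-conf"));

  // Decode the handshake secret and reject anything that is not a
  // known QOP token, instead of blindly storing it in saslProps.
  static String validateQop(byte[] secret) throws IOException {
    String qop = new String(secret, StandardCharsets.UTF_8);
    if (!VALID_QOPS.contains(qop)) {
      throw new IOException("Invalid QOP in handshake secret: " + qop);
    }
    return qop;
  }
}
{noformat}
With a guard like this (e.g. saslProps.put(Sasl.QOP, QopValidator.validateQop(secret))), a corrupted or unexpected secret such as "DI" would fail fast on the offending connection instead of silently poisoning the SASL props.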


> java.io.IOException Invalid token in javax.security.sasl.qop
> ------------------------------------------------------------
>
>                 Key: HDFS-16644
>                 URL: https://issues.apache.org/jira/browse/HDFS-16644
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.2.1
>            Reporter: Walter Su
>            Priority: Major
>
> Deployment:
> Server side: Kerberos-enabled cluster with JDK 1.8 and hdfs-server 3.2.1.
> Client side:
> I run the command hadoop fs -put with a test file, with a Kerberos ticket initialized first, using identical core-site.xml & hdfs-site.xml configurations.
>  using client ver 3.2.1, it succeeds.
>  using client ver 2.8.5, it succeeds.
>  using client ver 2.10.1, it fails. The client-side error is:
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
> 2022-06-27 01:06:15,781 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode{data=FSDataset{dirpath='[/mnt/disk1/hdfs, /mnt/***/hdfs, /mnt/***/hdfs, /mnt/***/hdfs]'}, localName='emr-worker-***.***:9866', datanodeUuid='b1c7f64a-6389-4739-bddf-***', xmitsInProgress=0}:Exception transfering block BP-1187699012-10.****-***:blk_1119803380_46080919 to mirror 10.*****:9866
> java.io.IOException: Invalid token in javax.security.sasl.qop: D
>         at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
> Once any ver 2.10.1 client connects to the HDFS cluster, the DataNode no longer accepts any client connection; even a ver 3.2.1 client cannot connect. Within a short time, all DataNodes reject client connections.
> The problem persists even if I replace the DataNode with ver 3.3.0 or replace Java with JDK 11.
> The problem is fixed if I replace the DataNode with ver 3.2.0, so I guess the problem is related to HDFS-13541.
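
To make the cascading failure concrete, below is a self-contained sketch (not HDFS code; the class and method names are hypothetical) of the failure mode described above, assuming the DataNode's SASL properties map is reused across connections as the comment suggests:
{noformat}
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import javax.security.sasl.Sasl;

public class SharedQopPoisonDemo {
  // One mutable map reused for every incoming connection, mirroring
  // the shared DN SASL props described in the comment above.
  private static final Map<String, String> saslProps = new HashMap<>();

  static void handshake(byte[] secret) {
    if (secret != null) {
      // Unvalidated overwrite, as in doSaslHandshake above.
      saslProps.put(Sasl.QOP, new String(secret, StandardCharsets.UTF_8));
    }
    System.out.println("QOP for this connection: " + saslProps.get(Sasl.QOP));
  }

  public static void main(String[] args) {
    saslProps.put(Sasl.QOP, "auth-conf");              // healthy initial value
    handshake(null);                                   // prints auth-conf
    handshake("DI".getBytes(StandardCharsets.UTF_8));  // prints DI (invalid)
    handshake(null);                                   // still DI: later clients break
  }
}
{noformat}
Because the map outlives the connection that corrupted it, every later handshake inherits the invalid QOP, which would match the observation that the DataNode then rejects every subsequent client.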


