Posted to common-issues@hadoop.apache.org by "Andrew Wang (JIRA)" <ji...@apache.org> on 2016/05/02 23:57:13 UTC

[jira] [Updated] (HADOOP-12345) Credential length in CredentialsSys.java incorrect

     [ https://issues.apache.org/jira/browse/HADOOP-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang updated HADOOP-12345:
---------------------------------
    Priority: Critical  (was: Blocker)

I'm downgrading this from a blocker since it's not a regression.

I also spent a little time trying to put together a repro. I'm not that familiar with the NFS gateway, and my IDE didn't find any place where mCredentialsLength is actually used. I think this requires a real NFS gateway unit test, since the RPC code presumably uses this length somewhere.
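For whoever picks this up, a rough sketch of such a test is below. It assumes CredentialsSys exposes setUID/setGID and write(XDR), that write() emits the declared credential length as the leading int, and that XDR exposes its buffer via getBytes() -- those signatures would need checking against the actual source:

import static org.junit.Assert.assertEquals;

import java.nio.ByteBuffer;

import org.apache.hadoop.oncrpc.XDR;
import org.apache.hadoop.oncrpc.security.CredentialsSys;
import org.junit.Test;

public class TestCredentialsSysLength {
  @Test
  public void declaredLengthMatchesSerializedBody() {
    // The hostname is picked up from the local machine; the bug only
    // shows when its byte length is not a multiple of 4.
    CredentialsSys creds = new CredentialsSys();
    creds.setUID(0);
    creds.setGID(0);

    XDR xdr = new XDR();
    creds.write(xdr);

    byte[] bytes = xdr.getBytes();
    // The leading int is the declared credential length; everything
    // after it is the serialized credential body, padding included.
    int declared = ByteBuffer.wrap(bytes).getInt();
    assertEquals(bytes.length - 4, declared);
  }
}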

This one is probably best handled by one of the NFS gateway experts, e.g. [~brandonli].

> Credential length in CredentialsSys.java incorrect
> --------------------------------------------------
>
>                 Key: HADOOP-12345
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12345
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: nfs
>    Affects Versions: 2.7.0
>            Reporter: Pradeep Nayak Udupi Kadbet
>            Priority: Critical
>
> Hi -
> There is a bug in the way hadoop-nfs sets the credential length in the "Credentials" field of the NFS RPC packet when using AUTH_SYS.
> In CredentialsSys.java, when we write the creds into the XDR object, we set the length as follows:
>  // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
> mCredentialsLength = 20 + mHostName.getBytes().length;
> (20 corresponds to 4 bytes for mStamp, 4 bytes for mUID, 4 bytes for mGID, 4 bytes for the hostname length field, and 4 bytes for the count of aux GIDs) and this is okay.
> However, when we add the length of the hostname to this, we do not add the extra padding bytes for the hostname (required when the length is not a multiple of 4). The NFS server therefore returns GARBAGE_ARGS, because it does not find the uid field where it expects to read it. I can reproduce this issue consistently on machines where the hostname length is not a multiple of 4.
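> For example, a 7-byte hostname occupies 8 bytes on the wire, since XDR pads variable-length data to a 4-byte boundary. The code above then declares 20 + 7 = 27 bytes while the serialized credential body actually spans 20 + 8 = 28 bytes, so the server starts parsing the uid one byte too early.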
> A possible fix is to do something like this:
> int pad = (4 - (mHostName.getBytes().length % 4)) % 4;
>  // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
> mCredentialsLength = 20 + mHostName.getBytes().length + pad;
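> (The pad here is the number of bytes needed to round the hostname up to the next 4-byte boundary; a plain length % 4 would give the wrong count when the remainder is 1 or 3.)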
> I would be happy to submit a patch, but I need some help getting it committed to mainline; I haven't contributed to Hadoop before.
> Cheers!
> Pradeep


