Posted to common-issues@hadoop.apache.org by "Eli Collins (JIRA)" <ji...@apache.org> on 2011/08/03 04:24:27 UTC

[jira] [Created] (HADOOP-7503) Client#getRemotePrincipal NPEs when given invalid dfs.*.name

Client#getRemotePrincipal NPEs when given invalid dfs.*.name
------------------------------------------------------------

                 Key: HADOOP-7503
                 URL: https://issues.apache.org/jira/browse/HADOOP-7503
             Project: Hadoop Common
          Issue Type: Bug
          Components: ipc, security
    Affects Versions: 0.20.203.0, 0.23.0
            Reporter: Eli Collins


The following code in Client#getRemotePrincipal NPEs if security is enabled and dfs.https.address, dfs.secondary.http.address, dfs.secondary.https.address, or fs.default.name has an invalid value (e.g. hdfs://foo.bar.com.foo.bar.com:1000). We should check address.checkAddress() for null (or check this earlier) and give a more helpful error message.

{noformat}
  return SecurityUtil.getServerPrincipal(conf.get(serverKey), address
    .getAddress().getCanonicalHostName());
{noformat}
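As an illustration of the failure mode: java.net.InetSocketAddress returns a null getAddress() when the configured host cannot be resolved, so dereferencing it NPEs. The guard and error message below are a hypothetical sketch of the suggested check, not the actual Hadoop fix:

```java
import java.net.InetSocketAddress;

public class RemotePrincipalGuard {
    // Hypothetical guard sketching the suggested fix: check the resolved
    // address for null before dereferencing, and name the bad value in
    // the error instead of letting a bare NullPointerException escape.
    public static String canonicalHostOrFail(InetSocketAddress address) {
        if (address.getAddress() == null) {
            throw new IllegalArgumentException(
                "Cannot resolve address " + address
                + "; check the configured dfs.*.address / fs.default.name value");
        }
        return address.getAddress().getCanonicalHostName();
    }

    public static void main(String[] args) {
        // The reserved .invalid TLD never resolves, mimicking a bogus
        // hostname such as foo.bar.com.foo.bar.com in the config.
        InetSocketAddress bad = new InetSocketAddress("host.invalid", 1000);
        System.out.println("unresolved: " + bad.isUnresolved());
        try {
            canonicalHostOrFail(bad);
        } catch (IllegalArgumentException e) {
            System.out.println("guarded: " + e.getMessage());
        }
    }
}
```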


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Assigned] (HADOOP-7503) Client#getRemotePrincipal NPEs when given invalid dfs.*.name

Posted by "Todd Lipcon (Assigned) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon reassigned HADOOP-7503:
-----------------------------------

    Assignee: Shingo Furuyama  (was: Sho Shimauchi)
    

[jira] [Assigned] (HADOOP-7503) Client#getRemotePrincipal NPEs when given invalid dfs.*.name

Posted by "Sho Shimauchi (Assigned) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sho Shimauchi reassigned HADOOP-7503:
-------------------------------------

    Assignee: Sho Shimauchi
    

[jira] [Commented] (HADOOP-7503) Client#getRemotePrincipal NPEs when given invalid dfs.*.name

Posted by "Eli Collins (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13112278#comment-13112278 ] 

Eli Collins commented on HADOOP-7503:
-------------------------------------

Thanks for following up, Uma. I think so; a test which enables security and sets a bogus dfs.https.address would confirm.


[jira] [Commented] (HADOOP-7503) Client#getRemotePrincipal NPEs when given invalid dfs.*.name

Posted by "Sho Shimauchi (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13216286#comment-13216286 ] 

Sho Shimauchi commented on HADOOP-7503:
---------------------------------------

I've talked with Shingo Furuyama and he has taken over this JIRA.
Could someone assign it to him?
                

[jira] [Updated] (HADOOP-7503) Client#getRemotePrincipal NPEs when given invalid dfs.*.name

Posted by "Shingo Furuyama (Updated) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shingo Furuyama updated HADOOP-7503:
------------------------------------

    Attachment: HADOOP-7503.patch

Hi guys, 

I'm attaching a patch which tests this error case.
                

[jira] [Commented] (HADOOP-7503) Client#getRemotePrincipal NPEs when given invalid dfs.*.name

Posted by "Uma Maheswara Rao G (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13108473#comment-13108473 ] 

Uma Maheswara Rao G commented on HADOOP-7503:
---------------------------------------------

Hi Eli,

It looks to me that in the latest code the null check has been handled.

{code}
public static String getServerPrincipal(String principalConfig,
      InetAddress addr) throws IOException {
    String[] components = getComponents(principalConfig);
    if (components == null || components.length != 3
        || !components[1].equals(HOSTNAME_PATTERN)) {
      return principalConfig;
    } else {
      if (addr == null) {
        throw new IOException("Can't replace " + HOSTNAME_PATTERN
            + " pattern since client address is null");
      }
      return replacePattern(components, addr.getCanonicalHostName());
    }
  }
{code}
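For context, the _HOST substitution that getServerPrincipal performs can be sketched standalone. The helpers below are simplified illustrations of that logic, not Hadoop's actual implementations:

```java
public class PrincipalPattern {
    static final String HOSTNAME_PATTERN = "_HOST";

    // Split a principal config like "nn/_HOST@EXAMPLE.COM" into
    // {name, host, realm}; a null input yields null, so callers fall
    // through to the "return principalConfig" branch.
    static String[] getComponents(String principalConfig) {
        if (principalConfig == null) {
            return null;
        }
        return principalConfig.split("[/@]");
    }

    // Substitute the _HOST placeholder with the canonical host name.
    static String replacePattern(String[] components, String hostname) {
        return components[0] + "/" + hostname.toLowerCase() + "@" + components[2];
    }

    public static void main(String[] args) {
        String[] components = getComponents("nn/_HOST@EXAMPLE.COM");
        System.out.println(replacePattern(components, "nn1.example.com"));
        // prints nn/nn1.example.com@EXAMPLE.COM
    }
}
```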


Is this the same place you are talking about, or are you expecting a more meaningful message? Please confirm so I can proceed.


Thanks
Uma

