Posted to common-dev@hadoop.apache.org by "Todd Lipcon (JIRA)" <ji...@apache.org> on 2009/05/22 01:43:45 UTC

[jira] Commented: (HADOOP-5626) SecondaryNamenode may report incorrect info host name

    [ https://issues.apache.org/jira/browse/HADOOP-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12711864#action_12711864 ] 

Todd Lipcon commented on HADOOP-5626:
-------------------------------------

This is the issue that prevented TestCheckpoint from passing in HADOOP-3694.

My patch is in the same spirit as Carlos's, but it uses InetAddress.isAnyLocalAddress instead of a string compare. Uploading shortly.
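
For reference, a minimal sketch of that idea (this is not the attached patch; the class and method names are made up for illustration):

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical sketch: decide which host the SecondaryNamenode should report
// as its info address. If the configured dfs.secondary.http.address binds to
// the wildcard (0.0.0.0), fall back to the local host; otherwise keep the
// configured host, instead of string-comparing against "0.0.0.0".
public class InfoHostSketch {
  static String chooseInfoHost(String configuredHost) throws UnknownHostException {
    InetAddress addr = InetAddress.getByName(configuredHost);
    if (addr.isAnyLocalAddress()) {
      // Nothing specific was configured, so the local address is the best we have.
      return InetAddress.getLocalHost().getHostAddress();
    }
    // A concrete host was configured; report it as-is.
    return configuredHost;
  }

  public static void main(String[] args) throws UnknownHostException {
    System.out.println(chooseInfoHost("0.0.0.0")); // falls back to the local host address
  }
}
{code}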

> SecondaryNamenode may report incorrect info host name
> -----------------------------------------------------
>
>                 Key: HADOOP-5626
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5626
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Carlos Valiente
>            Priority: Minor
>             Fix For: 0.21.0
>
>         Attachments: HADOOP-5626.patch
>
>
> I have set up {{dfs.secondary.http.address}} like this:
> {code}
> <property>
>   <name>dfs.secondary.http.address</name>
>   <value>secondary.example.com:50090</value>
> </property>
> {code}
> In my setup {{secondary.example.com}} resolves to an IP address (say, 192.168.0.10) which is not the same as the address the host resolves for itself (as returned by {{InetAddress.getLocalHost().getHostAddress()}}, say 192.168.0.1).
> In this situation, edit-log-related transfers fail. From the namenode log:
> {code}
> 2009-04-05 13:32:39,128 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.0.10
> 2009-04-05 13:32:39,168 WARN org.mortbay.log: /getimage: java.io.IOException: GetImage failed. java.net.ConnectException: Connection refused
>         at java.net.PlainSocketImpl.socketConnect(Native Method)
>         at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
>         at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
>         at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
>         at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
>         at java.net.Socket.connect(Socket.java:519)
>         at java.net.Socket.connect(Socket.java:469)
>         at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
>         at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
>         at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
>         at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
>         at sun.net.www.http.HttpClient.New(HttpClient.java:306)
>         at sun.net.www.http.HttpClient.New(HttpClient.java:323)
>         at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:837)
>         at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:778)
>         at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:703)
>         at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1026)
>         at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:151)
>         ...
> {code}
> From the secondary namenode log:
> {code}
> 2009-04-05 13:42:39,238 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint: 
> 2009-04-05 13:42:39,238 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: java.io.FileNotFoundException: http://nn.example.com:50070/getimage?putimage=1&port=50090&machine=192.168.0.1&token=-19:1243068779:0:1238929357000:1238929031783
>         at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1288)
>         at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:151)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.putFSImage(SecondaryNameNode.java:294)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:333)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:239)
>         at java.lang.Thread.run(Thread.java:619)
> {code}
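
To make the failure mode above concrete, here is a hedged illustration of the resolution mismatch the quoted description points at (the hostnames are the placeholders from the report; this is only a demonstration, not Hadoop code):

{code}
import java.net.InetAddress;

public class ResolutionMismatch {
  public static void main(String[] args) throws Exception {
    // The address the host resolves for itself (what ends up in "machine="),
    // e.g. 192.168.0.1.
    String reported = InetAddress.getLocalHost().getHostAddress();

    // What dfs.secondary.http.address actually resolves to, e.g. 192.168.0.10.
    String configured = InetAddress.getByName("secondary.example.com").getHostAddress();

    System.out.println("reported machine=   " + reported);
    System.out.println("configured address: " + configured);
    // When the two differ, the namenode connects back to the reported address
    // and fails with the ConnectException shown in the logs above.
  }
}
{code}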

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.