Posted to common-dev@hadoop.apache.org by "Bo Shi (JIRA)" <ji...@apache.org> on 2008/09/10 02:43:44 UTC

[jira] Commented: (HADOOP-4019) Hadoop 0.18.0 does not start on Mac OS X Leopard

    [ https://issues.apache.org/jira/browse/HADOOP-4019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12629671#action_12629671 ] 

Bo Shi commented on HADOOP-4019:
--------------------------------

For us, the issue was that we were using an out-of-date hadoop-defaults.xml file.

In DataNode.java:359:

    //init ipc server
    InetSocketAddress ipcAddr = NetUtils.createSocketAddr(
        conf.get("dfs.datanode.ipc.address"));
    ipcServer = RPC.getServer(this, ipcAddr.getHostName(), ipcAddr.getPort(), 
        conf.getInt("dfs.datanode.handler.count", 3), false, conf);
    ipcServer.start();
    dnRegistration.setIpcPort(ipcServer.getListenerAddress().getPort());
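
The stack trace in the report below points at NetUtils.createSocketAddr(NetUtils.java:130): conf.get() returns null when a key is present in neither the default nor the site configuration file, and createSocketAddr() does not guard against a null argument. A minimal diagnostic sketch of that failure mode (IpcAddrCheck is a made-up class name; assumes the 0.18 jars on the classpath):

    // Minimal diagnostic sketch, not part of Hadoop.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.net.NetUtils;

    public class IpcAddrCheck {
      public static void main(String[] args) {
        // Loads hadoop-default.xml and hadoop-site.xml from the classpath.
        Configuration conf = new Configuration();
        // get() returns null when the key appears in neither file.
        String addr = conf.get("dfs.datanode.ipc.address");
        if (addr == null) {
          System.out.println("dfs.datanode.ipc.address is unset; "
              + "createSocketAddr(null) would throw the NPE seen in the log");
        } else {
          System.out.println("resolves to " + NetUtils.createSocketAddr(addr));
        }
      }
    }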

"dfs.datanode.ipc.address" was not set in our default or site configuration files.  Copying the hadoop-defaults.xml file from the 0.18.0 release tarball to our grid configuration directories resolves the NullPointerException and allows HDFS to come up normally.



> Hadoop 0.18.0 does not start on Mac OS X Leopard
> ------------------------------------------------
>
>                 Key: HADOOP-4019
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4019
>             Project: Hadoop Core
>          Issue Type: Bug
>    Affects Versions: 0.18.0
>         Environment: Mac OS X 10.5 Leopard
>            Reporter: Mateusz Berezecki
>
> The configuration files used were simply copied from the 0.17.2 release, which
> was working for me, but 0.18 does not even start.
> 2008-08-25 21:13:08,411 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG: 
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = m.local/172.16.0.10
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.18.1-dev
> STARTUP_MSG:   build =  -r ; compiled by 'm' on Mon Aug 25 20:41:23 CEST 2008
> ************************************************************/
> 2008-08-25 21:13:08,643 INFO org.apache.hadoop.dfs.Storage: Storage directory /hadoop/datanode/data is not formatted.
> 2008-08-25 21:13:08,644 INFO org.apache.hadoop.dfs.Storage: Formatting ...
> 2008-08-25 21:13:08,718 INFO org.apache.hadoop.dfs.DataNode: Registered FSDatasetStatusMBean
> 2008-08-25 21:13:08,722 INFO org.apache.hadoop.dfs.DataNode: Opened info server at 50010
> 2008-08-25 21:13:08,728 INFO org.apache.hadoop.dfs.DataNode: Balancing bandwith is 1048576 bytes/s
> 2008-08-25 21:13:08,861 INFO org.mortbay.util.Credential: Checking Resource aliases
> 2008-08-25 21:13:08,935 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
> 2008-08-25 21:13:09,353 INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.WebApplicationHandler@baf4ae
> 2008-08-25 21:13:09,397 INFO org.mortbay.util.Container: Started WebApplicationContext[/,/]
> 2008-08-25 21:13:09,397 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs]
> 2008-08-25 21:13:09,398 INFO org.mortbay.util.Container: Started HttpContext[/static,/static]
> 2008-08-25 21:13:09,399 INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:50075
> 2008-08-25 21:13:09,400 INFO org.mortbay.util.Container: Started org.mortbay.jetty.Server@395aaf
> 2008-08-25 21:13:09,408 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=DataNode, sessionId=null
> 2008-08-25 21:13:09,419 ERROR org.apache.hadoop.dfs.DataNode: java.lang.NullPointerException
>         at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:130)
>         at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:119)
>         at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:359)
>         at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:190)
>         at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:2987)
>         at org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:2942)
>         at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:2950)
>         at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3072)
> 2008-08-25 21:13:09,419 INFO org.apache.hadoop.dfs.DataNode: SHUTDOWN_MSG: 
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at m.local/172.16.0.10
> ************************************************************/

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.