Posted to mapreduce-user@hadoop.apache.org by 李S <li...@corp.netease.com> on 2011/05/19 05:23:04 UTC

run hadoop pseudo-distribute examples failed

Hi All,
I'm trying to run the Hadoop (0.20.2) examples in pseudo-distributed mode, following the Hadoop user guide. After I run 'start-all.sh', it seems the namenode can't connect to the datanode.

'ssh localhost' works on my server. Someone advised removing '/tmp/hadoop-XXXX' and formatting the namenode again, but that doesn't help. And 'iptables -L' shows there are no firewall rules on my server:
test:/home/liyun2010# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination    
Can anyone give me more advice? Thanks!

Below are my namenode and datanode log files:
liyun2010@test:~/hadoop-0.20.2/logs$ cat hadoop-liyun2010-namenode-test.puppet.com.log
2011-05-19 10:58:25,938 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = test.puppet.com/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2011-05-19 10:58:26,197 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=9000
2011-05-19 10:58:26,212 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: test.puppet.com/127.0.0.1:9000
2011-05-19 10:58:26,220 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2011-05-19 10:58:26,224 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2011-05-19 10:58:26,405 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=liyun2010,users
2011-05-19 10:58:26,406 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2011-05-19 10:58:26,406 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2011-05-19 10:58:26,429 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
2011-05-19 10:58:26,434 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2011-05-19 10:58:26,511 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 9
2011-05-19 10:58:26,524 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 1
2011-05-19 10:58:26,530 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 920 loaded in 0 seconds.
2011-05-19 10:58:26,606 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Invalid opcode, reached end of edit log Number of transactions found 99
2011-05-19 10:58:26,606 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop-liyun2010/dfs/name/current/edits of size 1049092 edits # 99 loaded in 0 seconds.
2011-05-19 10:58:26,660 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 920 saved in 0 seconds.
2011-05-19 10:58:26,810 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 505 msecs
2011-05-19 10:58:26,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2011-05-19 10:58:26,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2011-05-19 10:58:26,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2011-05-19 10:58:26,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of  over-replicated blocks = 0
2011-05-19 10:58:26,825 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
2011-05-19 10:58:26,826 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2011-05-19 10:58:26,826 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2011-05-19 10:58:27,025 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2011-05-19 10:58:27,174 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2011-05-19 10:58:27,178 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2011-05-19 10:58:27,178 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2011-05-19 10:58:27,179 INFO org.mortbay.log: jetty-6.1.14
2011-05-19 10:58:27,269 WARN org.mortbay.log: Can't reuse /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08, using /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08_740365192444258489
2011-05-19 10:58:28,610 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
2011-05-19 10:58:28,611 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
2011-05-19 10:58:28,612 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2011-05-19 10:58:28,613 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2011-05-19 10:58:28,617 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000: starting
2011-05-19 10:58:28,618 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000: starting
2011-05-19 10:58:28,621 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000: starting
2011-05-19 10:58:28,625 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000: starting
2011-05-19 10:58:28,625 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000: starting
2011-05-19 10:58:28,626 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000: starting
2011-05-19 10:58:28,627 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000: starting
2011-05-19 10:58:28,629 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000: starting
2011-05-19 10:58:28,630 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000: starting
2011-05-19 10:58:28,630 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000: starting
2011-05-19 10:58:30,680 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50010 storage DS-173493047-127.0.0.1-50010-1305278767521
2011-05-19 10:58:30,687 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
2011-05-19 10:58:39,361 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=liyun2010,users     ip=/127.0.0.1   cmd=listStatus  src=/tmp/hadoop-liyun2010/mapred/system dst=null        perm=null
2011-05-19 10:58:39,393 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=liyun2010,users     ip=/127.0.0.1   cmd=delete      src=/tmp/hadoop-liyun2010/mapred/system dst=null        perm=null
2011-05-19 10:58:39,405 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=liyun2010,users     ip=/127.0.0.1   cmd=mkdirs      src=/tmp/hadoop-liyun2010/mapred/system dst=null        perm=liyun2010:supergroup:rwxr-xr-x
2011-05-19 10:58:39,417 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=liyun2010,users     ip=/127.0.0.1   cmd=setPermission       src=/tmp/hadoop-liyun2010/mapred/system dst=null        perm=liyun2010:supergroup:rwx-wx-wx
2011-05-19 10:58:39,507 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=liyun2010,users     ip=/127.0.0.1   cmd=create      src=/tmp/hadoop-liyun2010/mapred/system/jobtracker.info dst=null        perm=liyun2010:supergroup:rw-r--r--
2011-05-19 10:58:39,530 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=liyun2010,users     ip=/127.0.0.1   cmd=setPermission       src=/tmp/hadoop-liyun2010/mapred/system/jobtracker.info dst=null        perm=liyun2010:supergroup:rw-------
2011-05-19 10:58:39,538 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1
2011-05-19 10:58:39,541 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000, call addBlock(/tmp/hadoop-liyun2010/mapred/system/jobtracker.info, DFSClient_1143649887) from 127.0.0.1:56940: error: java.io.IOException: File /tmp/hadoop-liyun2010/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /tmp/hadoop-liyun2010/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
2011-05-19 10:58:39,554 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=liyun2010,users     ip=/127.0.0.1   cmd=delete      src=/tmp/hadoop-liyun2010/mapred/system/jobtracker.info dst=null        perm=null


liyun2010@test:~/hadoop-0.20.2/logs$ cat  hadoop-liyun2010-datanode-test.puppet.com.log
2011-05-19 10:58:27,372 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = test.puppet.com/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2011-05-19 10:58:28,932 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
2011-05-19 10:58:28,938 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
2011-05-19 10:58:28,942 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2011-05-19 10:58:29,137 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2011-05-19 10:58:29,341 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
2011-05-19 10:58:29,342 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
2011-05-19 10:58:29,342 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2011-05-19 10:58:29,342 INFO org.mortbay.log: jetty-6.1.14
2011-05-19 10:58:30,600 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2011-05-19 10:58:30,620 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=DataNode, sessionId=null
2011-05-19 10:58:30,659 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=DataNode, port=50020
2011-05-19 10:58:30,670 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2011-05-19 10:58:30,672 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
2011-05-19 10:58:30,672 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(test.puppet.com:50010, storageID=DS-173493047-127.0.0.1-50010-1305278767521, infoPort=50075, ipcPort=50020)
2011-05-19 10:58:30,673 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
2011-05-19 10:58:30,689 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
2011-05-19 10:58:30,689 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2011-05-19 10:58:30,690 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-173493047-127.0.0.1-50010-1305278767521, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/tmp/hadoop-liyun2010/dfs/data/current'}
2011-05-19 10:58:30,691 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
2011-05-19 10:58:30,774 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 23 msecs
2011-05-19 10:58:30,776 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner



2011-05-19

李S
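
For anyone hitting the same symptom: note that the datanode does register (see "Adding a new node" in the namenode log above), yet addBlock still fails with "could only be replicated to 0 nodes". A few diagnostic commands that often narrow this down; this is only a sketch, assuming HADOOP_HOME=~/hadoop-0.20.2 and the default /tmp/hadoop-$USER layout:

```shell
# Run from $HADOOP_HOME; adjust paths to your installation.
jps                           # both NameNode and DataNode should be listed
bin/hadoop dfsadmin -report   # the datanode should show non-zero configured capacity
df -h /tmp                    # a full or unwritable /tmp leaves the datanode with 0 free space
```

A datanode that registers but reports zero remaining capacity can produce exactly this "replicated to 0 nodes" error even when networking and iptables are fine, because the namenode excludes it when choosing replica targets.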

Re: run hadoop pseudo-distribute examples failed

Posted by Marcos Ortiz <ml...@uci.cu>.
On 05/18/2011 10:53 PM, 李S wrote:
> [original message and logs quoted in full above; trimmed]
Why don't you change the dfs dir from /tmp to another directory, for
example /usr/share/hadoop/dfs?
Can you attach your configuration files to inspect them?
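
To make that suggestion concrete, a minimal core-site.xml override might look like the following; the property name comes from core-default.xml, and the directory path is only an example:

```xml
<!-- core-site.xml: move Hadoop's working data out of /tmp so it survives
     reboots and tmp-cleaners. The path below is just an example. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/share/hadoop/dfs</value>
</property>
```

After changing this you would need to re-format the namenode, since the existing image and edits live under the old directory.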

Regards

-- 
Marcos Luís Ortíz Valmaseda
 Software Engineer (Large-Scaled Distributed Systems)
 University of Information Sciences,
 La Habana, Cuba
 Linux User # 418229
 http://about.me/marcosortiz 


Re: run hadoop pseudo-distribute examples failed

Posted by Kumar Kandasami <ku...@gmail.com>.
http://knowledgedonor.blogspot.com/2011/05/installing-cloudera-hadoop-hadoop-0202.html

Kumar    _/|\_
www.saisk.com
kumar@saisk.com
"making a profound difference with knowledge and creativity..."
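
The delete-and-reformat recovery that Kumar describes in the quoted message below can be sketched as follows. This is only a sketch, assuming the stock 0.20.2 layout with HADOOP_HOME=~/hadoop-0.20.2 and the default /tmp/hadoop-$USER data directory, and it destroys all HDFS data:

```shell
cd ~/hadoop-0.20.2
bin/stop-all.sh              # stop all daemons first
rm -rf /tmp/hadoop-$USER     # wipe the namenode image, edits, and datanode blocks
bin/hadoop namenode -format  # re-create an empty namespace
bin/start-all.sh             # restart; the datanode gets a fresh storage ID
```

Removing the name and data directories together matters: formatting only the namenode while old datanode state remains behind leads to namespace ID mismatch errors on the next start.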


2011/5/20 Kumar Kandasami <ku...@gmail.com>

> I have been setting up a pseudo-distributed cluster on my Mac, and as a
> solution to the problem you can try deleting all the data in the directory
> and formatting the namenode.
> See my blog, which outlines some of the issues/solutions I encountered
> while setting up a pseudo-distributed cluster on my Mac and might be
> helpful: http://knowledgedonor.blogspot.com/
>
>
> "liyun2010/mapred/system/jobtracker.info could only be replicated to 0
> nodes, instead of 1
> java.io.IOException: File /tmp/hadoop-liyun2010/mapred/system/
> jobtracker.info could only be replicated to 0 nodes, instead of 1"
>
> Kumar    _/|\_
> www.saisk.com
> kumar@saisk.com
> "making a profound difference with knowledge and creativity..."
>
>
> 2011/5/20 Marcos Ortiz <ml...@uci.cu>
>
>>  On 05/19/2011 10:35 PM, 李S wrote:
>>
>> Hi Marcos,
>> Thanks for your reply.
>>
>> The temporary directory '/tmp/hadoop-xxx' is defined in the Hadoop core
>> jar's configuration file "core-default.xml". Do you think this may cause
>> the failure? Below is the relevant config:
>>
>>  <property>
>>   <name>hadoop.tmp.dir</name>
>>   <value>/tmp/hadoop-${user.name}</value>
>>   <description>A base for other temporary directories.</description>
>> </property>
>>
>>
>> And what other config files do you need? I didn't modify any
>> configuration after downloading the hadoop-0.20.2 files, so I think the
>> configuration is all default values.
>>
>> Yes, those are the default values, but I think you could test with
>> another directory, because /tmp is a temporary directory and can be
>> erased easily.
>> For example, when you use CDH3, the default value there is
>> /var/lib/hadoop-0.20.2/cache/${user.name}, which is more convenient.
>> Of course, it's just a recommendation.
>> You can check Lars Francke's blog (http://blog.lars-francke.de/), where
>> he did an excellent job explaining the manual installation of a Hadoop
>> cluster.
>>
>> Regards
>>
>>
>>
>> 2011-05-20
>> ------------------------------
>>  李S
>> ------------------------------
>> From: Marcos Ortiz
>> Sent: 2011-05-19 20:40:06
>> To: mapreduce-user
>> Cc: 李S
>> Subject: Re: run hadoop pseudo-distribute examples failed
>>  On 05/18/2011 10:53 PM, 李S wrote:
>>
>> [original message and logs quoted in full above; trimmed]
>>
>> 2011-05-19 10:58:26,606 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop-liyun2010/dfs/name/current/edits of size 1049092 edits # 99 loaded in 0 seconds.
>>
>> 2011-05-19 10:58:26,660 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 920 saved in 0 seconds.
>>
>> 2011-05-19 10:58:26,810 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 505 msecs
>>
>> 2011-05-19 10:58:26,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
>>
>> 2011-05-19 10:58:26,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
>>
>> 2011-05-19 10:58:26,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
>>
>> 2011-05-19 10:58:26,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of  over-replicated blocks = 0
>>
>> 2011-05-19 10:58:26,825 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
>>
>> 2011-05-19 10:58:26,826 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
>>
>> 2011-05-19 10:58:26,826 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
>>
>> 2011-05-19 10:58:27,025 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
>>
>> 2011-05-19 10:58:27,174 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
>>
>> 2011-05-19 10:58:27,178 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
>>
>> 2011-05-19 10:58:27,178 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
>> 2011-05-19 10:58:27,179 INFO org.mortbay.log: jetty-6.1.14
>>
>> 2011-05-19 10:58:27,269 WARN org.mortbay.log: Can't reuse /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08, using /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08_740365192444258489
>> 2011-05-19 10:58:28,610 INFO org.mortbay.log: Started
>> SelectChannelConnector@0.0.0.0:50070
>>
>> 2011-05-19 10:58:28,611 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
>> 0.0.0.0:50070
>>
>> 2011-05-19 10:58:28,612 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>>
>> 2011-05-19 10:58:28,613 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
>>
>> 2011-05-19 10:58:28,617 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000: starting
>>
>> 2011-05-19 10:58:28,618 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000: starting
>>
>> 2011-05-19 10:58:28,621 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000: starting
>>
>> 2011-05-19 10:58:28,625 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000: starting
>>
>> 2011-05-19 10:58:28,625 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000: starting
>>
>> 2011-05-19 10:58:28,626 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000: starting
>>
>> 2011-05-19 10:58:28,627 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000: starting
>>
>> 2011-05-19 10:58:28,629 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000: starting
>>
>> 2011-05-19 10:58:28,630 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000: starting
>>
>> 2011-05-19 10:58:28,630 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000: starting
>>
>> 2011-05-19 10:58:30,680 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from
>> 127.0.0.1:50010 storage DS-173493047-127.0.0.1-50010-1305278767521
>>
>> 2011-05-19 10:58:30,687 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/
>> 127.0.0.1:50010
>>
>> 2011-05-19 10:58:39,361 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=liyun2010,users     ip=/
>> 127.0.0.1
>>    cmd=listStatus  src=/tmp/hadoop-liyun2010/mapred/system dst=nullperm=null
>>
>> 2011-05-19 10:58:39,393 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=liyun2010,users     ip=/
>> 127.0.0.1
>>    cmd=delete      src=/tmp/hadoop-liyun2010/mapred/system dst=nullperm=null
>>
>> 2011-05-19 10:58:39,405 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=liyun2010,users     ip=/
>> 127.0.0.1
>>    cmd=mkdirs      src=/tmp/hadoop-liyun2010/mapred/system dst=nullperm=liyun2010:supergroup:rwxr-xr-x
>>
>> 2011-05-19 10:58:39,417 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=liyun2010,users     ip=/
>> 127.0.0.1
>>    cmd=setPermission       src=/tmp/hadoop-liyun2010/mapred/system dst=nullperm=liyun2010:supergroup:rwx-wx-wx
>>
>> 2011-05-19 10:58:39,507 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=liyun2010,users     ip=/
>> 127.0.0.1
>>    cmd=create      src=/tmp/hadoop-liyun2010/mapred/system/jobtracker.infodst=null        perm=liyun2010:supergroup:rw-r--r--
>>
>> 2011-05-19 10:58:39,530 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=liyun2010,users     ip=/
>> 127.0.0.1
>>    cmd=setPermission       src=/tmp/hadoop-liyun2010/mapred/system/
>> jobtracker.info dst=null        perm=liyun2010:supergroup:rw-------
>>
>> 2011-05-19 10:58:39,538 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1
>>
>> 2011-05-19 10:58:39,541 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000, call addBlock(/tmp/hadoop-liyun2010/mapred/system/
>> jobtracker.info, DFSClient_1143649887) from 127.0.0.1:56940
>> : error: java.io.IOException: File /tmp/hadoop-liyun2010/mapred/system/
>> jobtracker.info could only be replicated to 0 nodes, instead of 1
>> java.io.IOException: File /tmp/hadoop-liyun2010/mapred/system/
>> jobtracker.info could only be replicated to 0 nodes, instead of 1
>>
>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>>
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>
>> 2011-05-19 10:58:39,554 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=liyun2010,users     ip=/
>> 127.0.0.1
>>    cmd=delete      src=/tmp/hadoop-liyun2010/mapred/system/jobtracker.infodst=null        perm=null
>>
>>
>>
>>  liyun2010@test:~/hadoop-0.20.2/logs$
>>  cat  hadoop-liyun2010-datanode-test.puppet.com.log
>>
>> 2011-05-19 10:58:27,372 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG:   host = test.puppet.com/127.0.0.1
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 0.20.2
>> STARTUP_MSG:   build =
>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20
>>  -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>> ************************************************************/
>>
>> 2011-05-19 10:58:28,932 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
>>
>> 2011-05-19 10:58:28,938 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
>>
>> 2011-05-19 10:58:28,942 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
>>
>> 2011-05-19 10:58:29,137 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
>>
>> 2011-05-19 10:58:29,341 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
>>
>> 2011-05-19 10:58:29,342 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
>>
>> 2011-05-19 10:58:29,342 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
>> 2011-05-19 10:58:29,342 INFO org.mortbay.log: jetty-6.1.14
>> 2011-05-19 10:58:30,600 INFO org.mortbay.log: Started
>> SelectChannelConnector@0.0.0.0:50075
>>
>> 2011-05-19 10:58:30,620 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=DataNode, sessionId=null
>>
>> 2011-05-19 10:58:30,659 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=DataNode, port=50020
>>
>> 2011-05-19 10:58:30,670 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>>
>> 2011-05-19 10:58:30,672 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
>>
>> 2011-05-19 10:58:30,672 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(
>> test.puppet.com:50010
>> , storageID=DS-173493047-127.0.0.1-50010-1305278767521, infoPort=50075, ipcPort=50020)
>>
>> 2011-05-19 10:58:30,673 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
>>
>> 2011-05-19 10:58:30,689 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
>>
>> 2011-05-19 10:58:30,689 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
>>
>> 2011-05-19 10:58:30,690 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
>> 127.0.0.1:50010
>> , storageID=DS-173493047-127.0.0.1-50010-1305278767521, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/tmp/hadoop-liyun2010/dfs/data/current'}
>>
>> 2011-05-19 10:58:30,691 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
>>
>> 2011-05-19 10:58:30,774 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 23 msecs
>>
>> 2011-05-19 10:58:30,776 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner
>>
>>
>>
>> 2011-05-19
>> ------------------------------
>> 李S
>>
>> Why don't you change the dfs dir from /tmp to another directory, for
>> example /usr/share/hadoop/dfs?
>> Can you attach your configuration files to inspect them?
>>
>> Regards
>>
>> --
>> Marcos Luís Ortíz Valmaseda
>>  Software Engineer (Large-Scaled Distributed Systems)
>>  University of Information Sciences,
>>  La Habana, Cuba
>>  Linux User # 418229
>>  http://about.me/marcosortiz
>>
>>
>>
>>
>

Re: run hadoop pseudo-distribute examples failed

Posted by Kumar Kandasami <ku...@gmail.com>.
I have been setting up a pseudo-distributed cluster on my Mac, and as a
solution to this problem you can try deleting all the data in the directory
and re-formatting the namenode.
See my blog, which outlines some of the issues/solutions I encountered
while setting up a pseudo-distributed cluster on my Mac and might be helpful:
http://knowledgedonor.blogspot.com/
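The wipe-and-reformat cycle described above can be sketched as shell commands. This is a sketch, not an exact transcript: it assumes a 0.20.2 tarball install run from the Hadoop home directory and the default hadoop.tmp.dir of /tmp/hadoop-${user.name}, and note that it destroys any existing HDFS data.

```
# Stop all daemons before touching on-disk state
bin/stop-all.sh
# Remove the stale namenode/datanode directories (default hadoop.tmp.dir)
rm -rf /tmp/hadoop-"$USER"
# Create a fresh, empty HDFS namespace
bin/hadoop namenode -format
# Restart; the datanode should now register with a matching namespace ID
bin/start-all.sh
```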

*"liyun2010/mapred/system/jobtracker.info could only be replicated to 0
nodes, instead of 1
java.io.IOException: File /tmp/hadoop-liyun2010/mapred/system/
jobtracker.info could only be replicated to 0 nodes, instead of 1"*
Kumar    _/|\_
www.saisk.com
kumar@saisk.com
"making a profound difference with knowledge and creativity..."


2011/5/20 Marcos Ortiz <ml...@uci.cu>

> [...]

Re: run hadoop pseudo-distribute examples failed

Posted by Marcos Ortiz <ml...@uci.cu>.
On 05/19/2011 10:35 PM, 李S wrote:
> Hi Marcos,
> Thanks for your reply.
> The temporary directory '/tmp/hadoop-xxx' is defined in the hadoop core
> jar's configuration file "*core-default.xml*". Do you think this may
> cause the failure? Below is the relevant config:
>
>     <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/tmp/hadoop-${user.name}</value>
>     <description>A base for other temporary directories.</description>
>     </property>
>
> And what other config files do you need? I didn't modify any
> configuration after downloading the hadoop-0.20.2 files; I think
> those settings are all at their default values.
Yes, those are the default values, but I think you can test with
another directory, because this is a temporary directory and it can be
erased easily.
For example, when you use CDH3, the default value there is
/var/lib/hadoop-0.20.2/cache/${user.name}, which is more convenient.
Of course, it's just a recommendation.
You can check Lars Francke's blog (http://blog.lars-francke.de/),
where he did an excellent job explaining the manual installation of a
Hadoop cluster.
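As a concrete sketch of that suggestion, you could override hadoop.tmp.dir in conf/core-site.xml instead of relying on the value baked into core-default.xml. The /var/lib path below is only an illustrative choice mirroring the CDH3 layout mentioned above, not a requirement:

```xml
<?xml version="1.0"?>
<!-- conf/core-site.xml: sketch of overriding hadoop.tmp.dir.
     The path is illustrative; the user running the daemons
     must be able to create and write to it. -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/lib/hadoop-0.20.2/cache/${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
```

After changing this, reformat the namenode so the new directory gets a valid namespace before restarting the daemons.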

Regards

> 2011-05-20
> ------------------------------------------------------------------------
> 李S
> ------------------------------------------------------------------------
> *From:* Marcos Ortiz
> *Sent:* 2011-05-19 20:40:06
> *To:* mapreduce-user
> *CC:* 李S
> *Subject:* Re: run hadoop pseudo-distribute examples failed
> On 05/18/2011 10:53 PM, 李S wrote:
>> Hi All,
>> I'm trying to run the hadoop (0.20.2) examples in Pseudo-Distributed Mode,
>> following the hadoop user guide. After I run 'start-all.sh', it
>> seems the namenode can't connect to the datanode.
>> 'ssh localhost' works on my server. Someone advised removing
>> '/tmp/hadoop-XXXX' and formatting the namenode again, but it doesn't work.
>> And 'iptables -L' shows there are no firewall rules on my server:
>>
>>     test:/home/liyun2010# iptables -L
>>     Chain INPUT (policy ACCEPT)
>>     target prot opt source destination
>>     Chain FORWARD (policy ACCEPT)
>>     target prot opt source destination
>>     Chain OUTPUT (policy ACCEPT)
>>     target prot opt source destination
>>
>> Can anyone give me more advice? Thanks!
>> Below are my namenode and datanode log files:
>> liyun2010@test:~/hadoop-0.20.2/logs$ cat
>> hadoop-liyun2010-namenode-test.puppet.com.log
>>
>>     2011-05-19 10:58:25,938 INFO
>>     org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>     /************************************************************
>>     STARTUP_MSG: Starting NameNode
>>     STARTUP_MSG: host = test.puppet.com/127.0.0.1
>>     STARTUP_MSG: args = []
>>     STARTUP_MSG: version = 0.20.2
>>     STARTUP_MSG: build =
>>     https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20
>>     -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>>     ************************************************************/
>>     2011-05-19 10:58:26,197 INFO
>>     org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC
>>     Metrics with hostName=NameNode, port=9000
>>     2011-05-19 10:58:26,212 INFO
>>     org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
>>     test.puppet.com/127.0.0.1:9000
>>     2011-05-19 10:58:26,220 INFO
>>     org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM
>>     Metrics with processName=NameNode, sessionId=null
>>     2011-05-19 10:58:26,224 INFO
>>     org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
>>     Initializing NameNodeMeterics using context
>>     object:org.apache.hadoop.metrics.spi.NullContext
>>     2011-05-19 10:58:26,405 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>     fsOwner=liyun2010,users
>>     2011-05-19 10:58:26,406 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>     supergroup=supergroup
>>     2011-05-19 10:58:26,406 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>     isPermissionEnabled=true
>>     2011-05-19 10:58:26,429 INFO
>>     org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
>>     Initializing FSNamesystemMetrics using context
>>     object:org.apache.hadoop.metrics.spi.NullContext
>>     2011-05-19 10:58:26,434 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>     FSNamesystemStatusMBean
>>     2011-05-19 10:58:26,511 INFO
>>     org.apache.hadoop.hdfs.server.common.Storage: Number of files = 9
>>     2011-05-19 10:58:26,524 INFO
>>     org.apache.hadoop.hdfs.server.common.Storage: Number of files
>>     under construction = 1
>>     2011-05-19 10:58:26,530 INFO
>>     org.apache.hadoop.hdfs.server.common.Storage: Image file of size
>>     920 loaded in 0 seconds.
>>     2011-05-19 10:58:26,606 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Invalid
>>     opcode, reached end of edit log Number of transactions found 99
>>     2011-05-19 10:58:26,606 INFO
>>     org.apache.hadoop.hdfs.server.common.Storage: Edits file
>>     /tmp/hadoop-liyun2010/dfs/name/current/edits of size 1049092
>>     edits # 99 loaded in 0 seconds.
>>     2011-05-19 10:58:26,660 INFO
>>     org.apache.hadoop.hdfs.server.common.Storage: Image file of size
>>     920 saved in 0 seconds.
>>     2011-05-19 10:58:26,810 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished
>>     loading FSImage in 505 msecs
>>     2011-05-19 10:58:26,825 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number
>>     of blocks = 0
>>     2011-05-19 10:58:26,825 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
>>     invalid blocks = 0
>>     2011-05-19 10:58:26,825 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
>>     under-replicated blocks = 0
>>     2011-05-19 10:58:26,825 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
>>     over-replicated blocks = 0
>>     2011-05-19 10:58:26,825 INFO org.apache.hadoop.hdfs.StateChange:
>>     STATE* Leaving safe mode after 0 secs.
>>     2011-05-19 10:58:26,826 INFO org.apache.hadoop.hdfs.StateChange:
>>     STATE* Network topology has 0 racks and 0 datanodes
>>     2011-05-19 10:58:26,826 INFO org.apache.hadoop.hdfs.StateChange:
>>     STATE* UnderReplicatedBlocks has 0 blocks
>>     2011-05-19 10:58:27,025 INFO org.mortbay.log: Logging to
>>     org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>>     org.mortbay.log.Slf4jLog
>>     2011-05-19 10:58:27,174 INFO org.apache.hadoop.http.HttpServer:
>>     Port returned by webServer.getConnectors()[0].getLocalPort()
>>     before open() is -1. Opening the listener on 50070
>>     2011-05-19 10:58:27,178 INFO org.apache.hadoop.http.HttpServer:
>>     listener.getLocalPort() returned 50070
>>     webServer.getConnectors()[0].getLocalPort() returned 50070
>>     2011-05-19 10:58:27,178 INFO org.apache.hadoop.http.HttpServer:
>>     Jetty bound to port 50070
>>     2011-05-19 10:58:27,179 INFO org.mortbay.log: jetty-6.1.14
>>     2011-05-19 10:58:27,269 WARN org.mortbay.log: Can't reuse
>>     /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08, using
>>     /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08_740365192444258489
>>     2011-05-19 10:58:28,610 INFO org.mortbay.log: Started
>>     SelectChannelConnector@0.0.0.0:50070
>>     2011-05-19 10:58:28,611 INFO
>>     org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up
>>     at: 0.0.0.0:50070
>>     2011-05-19 10:58:28,612 INFO org.apache.hadoop.ipc.Server: IPC
>>     Server Responder: starting
>>     2011-05-19 10:58:28,613 INFO org.apache.hadoop.ipc.Server: IPC
>>     Server listener on 9000: starting
>>     2011-05-19 10:58:28,617 INFO org.apache.hadoop.ipc.Server: IPC
>>     Server handler 0 on 9000: starting
>>     2011-05-19 10:58:28,618 INFO org.apache.hadoop.ipc.Server: IPC
>>     Server handler 1 on 9000: starting
>>     2011-05-19 10:58:28,621 INFO org.apache.hadoop.ipc.Server: IPC
>>     Server handler 2 on 9000: starting
>>     2011-05-19 10:58:28,625 INFO org.apache.hadoop.ipc.Server: IPC
>>     Server handler 4 on 9000: starting
>>     2011-05-19 10:58:28,625 INFO org.apache.hadoop.ipc.Server: IPC
>>     Server handler 5 on 9000: starting
>>     2011-05-19 10:58:28,626 INFO org.apache.hadoop.ipc.Server: IPC
>>     Server handler 6 on 9000: starting
>>     2011-05-19 10:58:28,627 INFO org.apache.hadoop.ipc.Server: IPC
>>     Server handler 3 on 9000: starting
>>     2011-05-19 10:58:28,629 INFO org.apache.hadoop.ipc.Server: IPC
>>     Server handler 8 on 9000: starting
>>     2011-05-19 10:58:28,630 INFO org.apache.hadoop.ipc.Server: IPC
>>     Server handler 9 on 9000: starting
>>     2011-05-19 10:58:28,630 INFO org.apache.hadoop.ipc.Server: IPC
>>     Server handler 7 on 9000: starting
>>     2011-05-19 10:58:30,680 INFO org.apache.hadoop.hdfs.StateChange:
>>     BLOCK* NameSystem.registerDatanode: node registration from
>>     127.0.0.1:50010 storage DS-173493047-127.0.0.1-50010-1305278767521
>>     2011-05-19 10:58:30,687 INFO
>>     org.apache.hadoop.net.NetworkTopology: Adding a new node:
>>     /default-rack/127.0.0.1:50010
>>     2011-05-19 10:58:39,361 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
>>     ugi=liyun2010,users ip=/127.0.0.1 cmd=listStatus
>>     src=/tmp/hadoop-liyun2010/mapred/system dst=nullperm=null
>>     2011-05-19 10:58:39,393 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
>>     ugi=liyun2010,users ip=/127.0.0.1 cmd=delete
>>     src=/tmp/hadoop-liyun2010/mapred/system dst=nullperm=null
>>     2011-05-19 10:58:39,405 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
>>     ugi=liyun2010,users ip=/127.0.0.1 cmd=mkdirs
>>     src=/tmp/hadoop-liyun2010/mapred/system
>>     dst=nullperm=liyun2010:supergroup:rwxr-xr-x
>>     2011-05-19 10:58:39,417 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
>>     ugi=liyun2010,users ip=/127.0.0.1 cmd=setPermission
>>     src=/tmp/hadoop-liyun2010/mapred/system
>>     dst=nullperm=liyun2010:supergroup:rwx-wx-wx
>>     2011-05-19 10:58:39,507 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
>>     ugi=liyun2010,users ip=/127.0.0.1 cmd=create
>>     src=/tmp/hadoop-liyun2010/mapred/system/jobtracker.infodst=null
>>     perm=liyun2010:supergroup:rw-r--r--
>>     2011-05-19 10:58:39,530 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
>>     ugi=liyun2010,users ip=/127.0.0.1 cmd=setPermission
>>     src=/tmp/hadoop-liyun2010/mapred/system/jobtracker.info dst=null
>>     perm=liyun2010:supergroup:rw-------
>>     2011-05-19 10:58:39,538 WARN
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to
>>     place enough replicas, still in need of 1
>>     2011-05-19 10:58:39,541 INFO org.apache.hadoop.ipc.Server: IPC
>>     Server handler 7 on 9000, call
>>     addBlock(/tmp/hadoop-liyun2010/mapred/system/jobtracker.info,
>>     DFSClient_1143649887) from 127.0.0.1:56940: error:
>>     java.io.IOException: File
>>     /tmp/hadoop-liyun2010/mapred/system/jobtracker.info could only be
>>     replicated to 0 nodes, instead of 1
>>     java.io.IOException: File
>>     /tmp/hadoop-liyun2010/mapred/system/jobtracker.info could only be
>>     replicated to 0 nodes, instead of 1
>>     at
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>>     at
>>     org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at
>>     sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>     at
>>     sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:396)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>     2011-05-19 10:58:39,554 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
>>     ugi=liyun2010,users ip=/127.0.0.1 cmd=delete
>>     src=/tmp/hadoop-liyun2010/mapred/system/jobtracker.infodst=null
>>     perm=null
>>
>> liyun2010@test:~/hadoop-0.20.2/logs$ cat
>> hadoop-liyun2010-datanode-test.puppet.com.log
>> 2011-05-19 10:58:27,372 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG: host = test.puppet.com/127.0.0.1
>> STARTUP_MSG: args = []
>> STARTUP_MSG: version = 0.20.2
>> STARTUP_MSG: build =
>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20
>> -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>> ************************************************************/
>> 2011-05-19 10:58:28,932 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
>> FSDatasetStatusMBean
>> 2011-05-19 10:58:28,938 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server
>> at 50010
>> 2011-05-19 10:58:28,942 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith
>> is 1048576 bytes/s
>> 2011-05-19 10:58:29,137 INFO org.mortbay.log: Logging to
>> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>> org.mortbay.log.Slf4jLog
>> 2011-05-19 10:58:29,341 INFO org.apache.hadoop.http.HttpServer: Port
>> returned by webServer.getConnectors()[0].getLocalPort() before open()
>> is -1. Opening the listener on 50075
>> 2011-05-19 10:58:29,342 INFO org.apache.hadoop.http.HttpServer:
>> listener.getLocalPort() returned 50075
>> webServer.getConnectors()[0].getLocalPort() returned 50075
>> 2011-05-19 10:58:29,342 INFO org.apache.hadoop.http.HttpServer: Jetty
>> bound to port 50075
>> 2011-05-19 10:58:29,342 INFO org.mortbay.log: jetty-6.1.14
>> 2011-05-19 10:58:30,600 INFO org.mortbay.log: Started
>> SelectChannelConnector@0.0.0.0:50075
>> 2011-05-19 10:58:30,620 INFO
>> org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics
>> with processName=DataNode, sessionId=null
>> 2011-05-19 10:58:30,659 INFO
>> org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics
>> with hostName=DataNode, port=50020
>> 2011-05-19 10:58:30,670 INFO org.apache.hadoop.ipc.Server: IPC Server
>> Responder: starting
>> 2011-05-19 10:58:30,672 INFO org.apache.hadoop.ipc.Server: IPC Server
>> handler 0 on 50020: starting
>> 2011-05-19 10:58:30,672 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration =
>> DatanodeRegistration(test.puppet.com:50010,
>> storageID=DS-173493047-127.0.0.1-50010-1305278767521, infoPort=50075,
>> ipcPort=50020)
>> 2011-05-19 10:58:30,673 INFO org.apache.hadoop.ipc.Server: IPC Server
>> handler 1 on 50020: starting
>> 2011-05-19 10:58:30,689 INFO org.apache.hadoop.ipc.Server: IPC Server
>> handler 2 on 50020: starting
>> 2011-05-19 10:58:30,689 INFO org.apache.hadoop.ipc.Server: IPC Server
>> listener on 50020: starting
>> 2011-05-19 10:58:30,690 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode:
>> DatanodeRegistration(127.0.0.1:50010,
>> storageID=DS-173493047-127.0.0.1-50010-1305278767521, infoPort=50075,
>> ipcPort=50020)In DataNode.run, data =
>> FSDataset{dirpath='/tmp/hadoop-liyun2010/dfs/data/current'}
>> 2011-05-19 10:58:30,691 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: using
>> BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
>> 2011-05-19 10:58:30,774 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0
>> blocks got processed in 23 msecs
>> 2011-05-19 10:58:30,776 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic
>> block scanner
>> 2011-05-19
>> ------------------------------------------------------------------------
>> 李S
> Why don't you change the dfs dir from /tmp to another directory, for
> example /usr/share/hadoop/dfs?
> Can you attach your configuration files to inspect them?
>
> Regards
>
> -- 
> Marcos Luís Ortíz Valmaseda
>  Software Engineer (Large-Scaled Distributed Systems)
>  University of Information Sciences,
>  La Habana, Cuba
>  Linux User # 418229
>  http://about.me/marcosortiz 


-- 
Marcos Luís Ortíz Valmaseda
 Software Engineer (Large-Scaled Distributed Systems)
 University of Information Sciences,
 La Habana, Cuba
 Linux User # 418229
 http://about.me/marcosortiz 


Re: Re: run hadoop pseudo-distribute examples failed

Posted by 李S <li...@corp.netease.com>.
Hi Marcos,
Thanks for your reply. 

The temporary directory '/tmp/hadoop-xxx' is defined in the hadoop core jar's configuration file "core-default.xml". Do you think this may cause the failure? Below is the detailed config:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>

And which other config files do you need? I hardly modified any configuration after downloading the hadoop-0.20.2 files, so I think those settings are all at their default values.

2011-05-20 



李S



From: Marcos Ortiz
Date: 2011-05-19 20:40:06
To: mapreduce-user
CC: 李S
Subject: Re: run hadoop pseudo-distribute examples failed
 
On 05/18/2011 10:53 PM, 李S wrote:
Hi All,

I'm trying to run hadoop(0.20.2) examples in Pseudo-Distributed Mode following the hadoop user guide. After I run the 'start-all.sh', it seems the namenode can't connect to datanode.

'SSH localhost' works on my server. Someone advised removing '/tmp/hadoop-XXXX' and formatting the namenode again, but it didn't work. And 'iptables -L' shows there are no firewall rules on my server:
test:/home/liyun2010# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination    
Can anyone give me more advice? Thanks!




2011-05-19 



李S
Why don't you change the dfs dir from /tmp to another directory, for example /usr/share/hadoop/dfs?
Can you attach your configuration files to inspect them?
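A minimal hdfs-site.xml sketch for moving the dfs directories out of /tmp (the /usr/share/hadoop/dfs paths just follow the suggestion above; dfs.name.dir and dfs.data.dir are the 0.20-era property names):

```xml
<configuration>
  <!-- Where the namenode stores the fsimage and edit log -->
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/share/hadoop/dfs/name</value>
  </property>
  <!-- Where datanodes store block data -->
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/share/hadoop/dfs/data</value>
  </property>
</configuration>
```

After changing these, stop the daemons, format the namenode again, and restart, since the old metadata under /tmp will not be found at the new paths.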

Regards


-- 
Marcos Luís Ortíz Valmaseda
 Software Engineer (Large-Scaled Distributed Systems)
 University of Information Sciences,
 La Habana, Cuba
 Linux User # 418229
 http://about.me/marcosortiz