Posted to hdfs-user@hadoop.apache.org by David Ginzburg <gi...@hotmail.com> on 2010/02/25 17:22:57 UTC

slave dies on a Virtual machine


Hi,
I have two physical machines running Windows (one Windows 7 and one Windows XP). I have installed VirtualBox on both machines, with Ubuntu Karmic VM images running JDK 6u18 and Hadoop. I followed the instructions at http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster) using the two Linux VMs.
None of the Hadoop folders or files are on an NFS mount. The replication factor is 2.

When I run start-all.sh, the second DataNode, the one on the remote slave machine, dies. I have attached the log files.
10.10.1.168 runs a NameNode and a DataNode; 10.10.1.29 runs only a DataNode.
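For reference, a quick check on the slave itself is whether the DataNode JVM is still up and what its own log says (a minimal sketch, assuming the default log directory $HADOOP_HOME/logs and the usual hadoop-<user>-datanode-<host>.log file naming):

  # on 10.10.1.29: list the Java daemons that are actually running (DataNode, TaskTracker, ...)
  jps

  # inspect the last lines the DataNode wrote before it died
  tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log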
Exceptions from name node log:
2010-02-25 17:57:43,395 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop,hadoop,adm,dialout,cdrom,plugdev,lpadmin,admin,sambashare ip=/10.10.1.168 cmd=listStatus src=/home/hadoop/hadoop/tmp/mapred/system dst=null perm=null
2010-02-25 17:57:43,401 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 54310, call delete(/home/hadoop/hadoop/tmp/mapred/system, true) from 10.10.1.168:40976: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /home/hadoop/hadoop/tmp/mapred/system. Name node is in safe mode.
The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 22 seconds.
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /home/hadoop/hadoop/tmp/mapred/system. Name node is in safe mode.
The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 22 seconds.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1696)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1676)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:517)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
2010-02-25 17:57:53,413 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop,hadoop,adm,dialout,cdrom,plugdev,lpadmin,admin,sambashare ip=/10.10.1.168 cmd=listStatus src=/home/hadoop/hadoop/tmp/mapred/system dst=null perm=null
2010-02-25 17:57:53,417 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 54310, call delete(/home/hadoop/hadoop/tmp/mapred/system, true) from 10.10.1.168:40977: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /home/hadoop/hadoop/tmp/mapred/system. Name node is in safe mode.
The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 12 seconds.
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /home/hadoop/hadoop/tmp/mapred/system. Name node is in safe mode.
The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 12 seconds.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1696)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1676)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:517)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)


2010-02-25 17:57:17,376 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 54310: starting
2010-02-25 17:57:29,778 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 10.10.1.29:50010 storage DS-1854135747-127.0.1.1-50010-1267023024595
2010-02-25 17:57:29,798 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/10.10.1.29:50010
2010-02-25 17:57:29,860 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.processReport: block blk_-7905823761682315237_1483 on 10.10.1.29:50010 size 4 does not belong to any file.
2010-02-25 17:57:29,860 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_-7905823761682315237 is added to invalidSet of 10.10.1.29:50010
2010-02-25 17:57:35,869 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node 10.10.1.29:50010 is replaced by 10.10.1.168:50010 with the same storageID DS-1854135747-127.0.1.1-50010-1267023024595
2010-02-25 17:57:35,869 INFO org.apache.hadoop.net.NetworkTopology: Removing a node: /default-rack/10.10.1.29:50010
....

-50010-1267023024595. Node 10.10.1.168:50010 is expected to serve this storage.
org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Data node 10.10.1.29:50010 is attempting to report storage ID DS-1854135747-127.0.1.1-50010-1267023024595. Node 10.10.1.168:50010 is expected to serve this storage.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDatanode(FSNamesystem.java:3914)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.processReport(FSNamesystem.java:2885)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.blockReport(NameNode.java:715)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
2010-02-25 17:57:43,395 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop,hadoop,adm,dialout,cdrom,plugdev,lpadmin,admin,sambashare ip=/10.10.1.168 cmd=listStatus src=/home/hadoop/hadoop/tmp/mapred/system dst=null perm=null
2010-02-25 17:57:43,401 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 54310, call delete(/home/hadoop/hadoop/tmp/mapred/system, true) from 10.10.1.168:40976: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /home/hadoop/hadoop/tmp/mapred/system. Name node is in safe mode.
The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 22 seconds.
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /home/hadoop/hadoop/tmp/mapred/system. Name node is in safe mode.
The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 22 seconds.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1696)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1676)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:517)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)




 		 	   		  

RE: slave dies on a Virtual machine

Posted by David Ginzburg <gi...@hotmail.com>.
Hi,
The issue was resolved by formatting the namenode. I think my VM image contained the HDFS temp folder, and that caused the problem.
When I now try to copy a file using
~/hadoop-0.20.1$ hadoop fs -copyToLocal ~/Desktop/OReilly.Hadoop.The.Definitive.Guide.Jun.2009.pdf hadoop.pdf
I get
copyToLocal: null
What is the meaning of the null output? mkdir worked, though.
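For context, the fs shell's two copy directions are -copyFromLocal (local path into HDFS) and -copyToLocal (HDFS path out to the local disk). A minimal sketch with illustrative paths (/user/hadoop is the usual HDFS home directory for the hadoop user):

  # put a local file into HDFS (source is local, destination is an HDFS path)
  hadoop fs -copyFromLocal ~/Desktop/OReilly.Hadoop.The.Definitive.Guide.Jun.2009.pdf /user/hadoop/hadoop.pdf

  # confirm it arrived
  hadoop fs -ls /user/hadoop

  # copy it back out of HDFS onto the local filesystem
  hadoop fs -copyToLocal /user/hadoop/hadoop.pdf ~/hadoop.pdf

Because the source passed to -copyToLocal in the failing command was a local path, the shell resolves it against HDFS, where it does not exist, which may be why only null is printed.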

Date: Fri, 26 Feb 2010 15:41:21 -0500
Subject: Re: slave dies on a Virtual machine
From: danimus.g.prime@gmail.com
To: hdfs-user@hadoop.apache.org

Did you find what the issue was?
Maybe you did not configure the two VMs as a cluster (as per Michael Noll's pages).
You should first run bin/start-dfs.sh on your NameNode, and then bin/start-mapred.sh from the secondary node, instead of start-all.sh.


Dan



 		 	   		  

Re: slave dies on a Virtual machine

Posted by Dan Fundatureanu <da...@gmail.com>.
Did you find what the issue was?
Maybe you did not configure the two VMs as a cluster (as per Michael Noll's
pages).
You should first run bin/start-dfs.sh on your NameNode, and then
bin/start-mapred.sh from the secondary node, instead of start-all.sh.
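Concretely, the sequence would look roughly like this (a minimal sketch,
assuming the stock 0.20 scripts in the Hadoop bin directory and that the
JobTracker runs on the master node, as in Michael Noll's tutorial):

  # on the master (NameNode): bring up HDFS first
  bin/start-dfs.sh

  # on the node running the JobTracker: start MapReduce once HDFS is up
  bin/start-mapred.sh

  # optional sanity check before submitting jobs: has the NameNode left safe mode?
  bin/hadoop dfsadmin -safemode get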

Dan

On Thu, Feb 25, 2010 at 11:22 AM, David Ginzburg <gi...@hotmail.com> wrote:

>
> Hi,
>
> I have two physical machines running Windows (one Windows 7 and one Windows
> XP).
> I have installed VirtualBox on both machines, with Ubuntu Karmic VM
> images running JDK 6u18 and Hadoop.
> I followed the instructions at
> http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster) using
> the two Linux VMs.
>
> None of the Hadoop folders or files are on an NFS mount.
> The replication factor is 2.
>
>
> When I run start-all.sh, the second DataNode, the one on the remote slave
> machine, dies. I have attached the log files.
>
> 10.10.1.168 runs a NameNode and a DataNode; 10.10.1.29 runs only a DataNode
>
> Exceptions from name node log:
> 2010-02-25 17:57:43,395 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
> ugi=hadoop,hadoop,adm,dialout,cdrom,plugdev,lpadmin,admin,sambashare ip=/
> 10.10.1.168 cmd=listStatus src=/home/hadoop/hadoop/tmp/mapred/system
> dst=null perm=null
> 2010-02-25 17:57:43,401 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 54310, call delete(/home/hadoop/hadoop/tmp/mapred/system, true)
> from 10.10.1.168:40976: error:
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
> /home/hadoop/hadoop/tmp/mapred/system. Name node is in safe mode.
> The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe
> mode will be turned off automatically in 22 seconds.
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
> /home/hadoop/hadoop/tmp/mapred/system. Name node is in safe mode.
> The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe
> mode will be turned off automatically in 22 seconds.
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1696)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1676)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:517)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> 2010-02-25 17:57:53,413 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
> ugi=hadoop,hadoop,adm,dialout,cdrom,plugdev,lpadmin,admin,sambashare ip=/
> 10.10.1.168 cmd=listStatus src=/home/hadoop/hadoop/tmp/mapred/system
> dst=null perm=null
> 2010-02-25 17:57:53,417 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 54310, call delete(/home/hadoop/hadoop/tmp/mapred/system, true)
> from 10.10.1.168:40977: error:
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
> /home/hadoop/hadoop/tmp/mapred/system. Name node is in safe mode.
> The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe
> mode will be turned off automatically in 12 seconds.
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
> /home/hadoop/hadoop/tmp/mapred/system. Name node is in safe mode.
> The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe
> mode will be turned off automatically in 12 seconds.
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1696)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1676)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:517)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>
>
> 2010-02-25 17:57:17,376 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 54310: starting
> 2010-02-25 17:57:29,778 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.registerDatanode: node registration from 10.10.1.29:50010 storage DS-1854135747-127.0.1.1-50010-1267023024595
> 2010-02-25 17:57:29,798 INFO org.apache.hadoop.net.NetworkTopology: Adding
> a new node: /default-rack/10.10.1.29:50010
> 2010-02-25 17:57:29,860 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.processReport: block blk_-7905823761682315237_1483 on
> 10.10.1.29:50010 size 4 does not belong to any file.
> 2010-02-25 17:57:29,860 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.addToInvalidates: blk_-7905823761682315237 is added to invalidSet
> of 10.10.1.29:50010
> 2010-02-25 17:57:35,869 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.registerDatanode: node 10.10.1.29:50010 is replaced by
> 10.10.1.168:50010 with the same storageID
> DS-1854135747-127.0.1.1-50010-1267023024595
> 2010-02-25 17:57:35,869 INFO org.apache.hadoop.net.NetworkTopology:
> Removing a node: /default-rack/10.10.1.29:50010
>
> ....
>
>
> -50010-1267023024595. Node 10.10.1.168:50010 is expected to serve this
> storage.
> org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Data node
> 10.10.1.29:50010 is attempting to report storage ID
> DS-1854135747-127.0.1.1-50010-1267023024595. Node 10.10.1.168:50010 is
> expected to serve this storage.
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDatanode(FSNamesystem.java:3914)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.processReport(FSNamesystem.java:2885)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.blockReport(NameNode.java:715)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> 2010-02-25 17:57:43,395 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
> ugi=hadoop,hadoop,adm,dialout,cdrom,plugdev,lpadmin,admin,sambashare ip=/
> 10.10.1.168 cmd=listStatus src=/home/hadoop/hadoop/tmp/mapred/system
> dst=null perm=null
> 2010-02-25 17:57:43,401 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 54310, call delete(/home/hadoop/hadoop/tmp/mapred/system, true)
> from 10.10.1.168:40976: error:
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
> /home/hadoop/hadoop/tmp/mapred/system. Name node is in safe mode.
> The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe
> mode will be turned off automatically in 22 seconds.
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
> /home/hadoop/hadoop/tmp/mapred/system. Name node is in safe mode.
> The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe
> mode will be turned off automatically in 22 seconds.
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1696)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1676)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:517)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>
>
>
>
>