Posted to common-user@hadoop.apache.org by Adarsh Sharma <ad...@orkash.com> on 2011/07/07 13:13:00 UTC

HTTP Error

Dear all,

Today I am stuck with a strange problem in the running Hadoop cluster.

After starting Hadoop with bin/start-all.sh, all nodes are started. But 
when I check through the web UI (Master-IP:50070), it shows:


    HTTP ERROR: 404

/dfshealth.jsp

RequestURI=/dfshealth.jsp

Powered by Jetty:// <http://jetty.mortbay.org/>

I checked by command line that Hadoop is not able to get out of safe mode.

I know the command to leave safe mode manually:

bin/hadoop dfsadmin -safemode leave
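
(For reference, the same dfsadmin tool can also report on safe mode or block until it exits, which is handy before forcing it off:)

bin/hadoop dfsadmin -safemode get
bin/hadoop dfsadmin -safemode wait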

But how can I make Hadoop run properly, and what are the reasons for 
this error?

Thanks



RE: Issue with MR code not scaling correctly with data sizes

Posted by "GOEKE, MATTHEW (AG/1000)" <ma...@monsanto.com>.
Bobby,

I am sorry for the cross post as I didn't realize that common was BCC'ed. Won't do it again :)

This morning I was able to resolve the issue after having a talk with our admin. Turns out changing configuration params around data dirs without letting the devs know is a bad thing! Thank you again for the questions, as the counters confirmed for me that it actually was outputting all of my data.

Matt

From: Robert Evans [mailto:evans@yahoo-inc.com]
Sent: Friday, July 15, 2011 9:56 AM
To: mapreduce-user@hadoop.apache.org
Cc: GOEKE, MATTHEW [AG/1000]
Subject: Re: Issue with MR code not scaling correctly with data sizes

Please don't cross post.  I put common-user in BCC.

I really don't know for sure what is happening, especially without the code or more to go on, and debugging something remotely over e-mail is extremely difficult.  You are essentially doing a cross join, which is going to be very expensive no matter what you do.  But I do have a few questions for you.

  1.  How large is the ID file (or files) you are using?  Have you updated the amount of heap the JVM has, and the number of slots, to accommodate it?
  2.  How are you storing the IDs in RAM to do the join?
  3.  Have you tried logging in your map/reduce code to verify that the number of entries you expect is being loaded at each stage?
  4.  Along with that, have you looked at the counters for your map/reduce program to verify that the number of records flowing through the system is as expected? (A sketch of such a counter follows below.)
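
For illustration, a minimal sketch of such a counter, assuming the new (org.apache.hadoop.mapreduce) API; the class, group, and counter names are invented for the example:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.mapreduce.Reducer;

    // Counts every record the reducer sees, so the job's counter page can be
    // checked against the expected (n^2-n)/2 output size.
    public class CountingReducer
        extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable> {
      @Override
      protected void reduce(IntWritable key, Iterable<IntWritable> values,
          Context context) throws IOException, InterruptedException {
        for (IntWritable v : values) {
          context.getCounter("Debug", "REDUCE_INPUT_RECORDS").increment(1);
          context.write(key, v);
        }
      }
    }

The same getCounter() call works in a mapper, so the record counts going into and out of each stage can be compared.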

--Bobby

On 7/14/11 5:14 PM, "GOEKE, MATTHEW (AG/1000)" <ma...@monsanto.com> wrote:
All,

I have an MR program that takes a list of IDs and generates the unique comparison set as a result. Example: if I have the list {1,2,3,4,5} then the resulting output would be {2x1, 3x2, 3x1, 4x3, 4x2, 4x1, 5x4, 5x3, 5x2, 5x1}, i.e. (n^2-n)/2 comparisons. My code works just fine on smaller sets (I can verify fewer than 1000 fairly easily) but fails when I try to push the set to 10-20k IDs, which is annoying when the end goal is 1-10 million.

The flow of the program is (a sketch of steps 2-4 follows below):
        1) Partition the IDs evenly, based on the amount of output per value, into a set of keys equal to the number of reduce slots we currently have
        2) Use the distributed cache to push the ID file out to the various reducers
        3) In the setup of the reducer, populate an int array with the values from the ID file in the distributed cache
        4) Output a comparison only if the current ID from the values iterator is greater than the current entry in the int array

I realize that this could be done many other ways, but this will be part of an Oozie workflow, so it made sense to just do it in MR for now. My issue is that when I try the larger ID files it only outputs part of the resulting data set, and there are no errors to be found. Part of me thinks that I need to tweak some site configuration properties, due to the amount of data spilling to disk, but after scanning through all three *-site.xml files I am having trouble pinpointing anything I think could be causing this. I moved from reading the file from HDFS to using the distributed cache for the join read, thinking that might solve my problem, but there seems to be something else I am overlooking.
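
For reference, a minimal sketch of the reducer side described in steps 2-4 above, assuming the new (org.apache.hadoop.mapreduce) API on 0.20; the class, file, and counter names are invented, error handling is minimal, and this is an illustration rather than the poster's actual code:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.mapreduce.Reducer;

    public class PairwiseReducer
        extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable> {

      private int[] ids;  // step 3: the full ID list, loaded once per reducer

      @Override
      protected void setup(Context context) throws IOException {
        // Steps 2-3: read the cached ID file (assumed one integer per line)
        // into an int array.
        Path[] cached = DistributedCache.getLocalCacheFiles(context.getConfiguration());
        List<Integer> loaded = new ArrayList<Integer>();
        BufferedReader in = new BufferedReader(new FileReader(cached[0].toString()));
        try {
          String line;
          while ((line = in.readLine()) != null) {
            loaded.add(Integer.valueOf(line.trim()));
          }
        } finally {
          in.close();
        }
        ids = new int[loaded.size()];
        for (int i = 0; i < ids.length; i++) {
          ids[i] = loaded.get(i);
        }
        // A counter here confirms the whole file was loaded (Bobby's question 4).
        context.getCounter("Join", "IDS_LOADED").increment(ids.length);
      }

      @Override
      protected void reduce(IntWritable key, Iterable<IntWritable> values,
          Context context) throws IOException, InterruptedException {
        // Step 4: emit id x candidate only when candidate < id, so each
        // unordered pair comes out exactly once.
        for (IntWritable v : values) {
          int id = v.get();
          for (int candidate : ids) {
            if (id > candidate) {
              context.write(new IntWritable(id), new IntWritable(candidate));
            }
          }
        }
      }
    }

If the loaded count or the emitted count ever disagrees with the expected (n^2-n)/2, the counters narrow down which stage is dropping records.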

Any advice is greatly appreciated!

Matt


Re: Issue with MR code not scaling correctly with data sizes

Posted by Robert Evans <ev...@yahoo-inc.com>.
Please don't cross post.  I put common-user in BCC.

I really don't know for sure what is happening, especially without the code or more to go on, and debugging something remotely over e-mail is extremely difficult.  You are essentially doing a cross join, which is going to be very expensive no matter what you do.  But I do have a few questions for you.


 1.  How large is the ID file (or files) you are using?  Have you updated the amount of heap the JVM has, and the number of slots, to accommodate it?
 2.  How are you storing the IDs in RAM to do the join?
 3.  Have you tried logging in your map/reduce code to verify that the number of entries you expect is being loaded at each stage?
 4.  Along with that, have you looked at the counters for your map/reduce program to verify that the number of records flowing through the system is as expected?

--Bobby

On 7/14/11 5:14 PM, "GOEKE, MATTHEW (AG/1000)" <ma...@monsanto.com> wrote:

All,

I have an MR program that takes a list of IDs and generates the unique comparison set as a result. Example: if I have the list {1,2,3,4,5} then the resulting output would be {2x1, 3x2, 3x1, 4x3, 4x2, 4x1, 5x4, 5x3, 5x2, 5x1}, i.e. (n^2-n)/2 comparisons. My code works just fine on smaller sets (I can verify fewer than 1000 fairly easily) but fails when I try to push the set to 10-20k IDs, which is annoying when the end goal is 1-10 million.

The flow of the program is:
        1) Partition the IDs evenly, based on the amount of output per value, into a set of keys equal to the number of reduce slots we currently have
        2) Use the distributed cache to push the ID file out to the various reducers
        3) In the setup of the reducer, populate an int array with the values from the ID file in the distributed cache
        4) Output a comparison only if the current ID from the values iterator is greater than the current entry in the int array

I realize that this could be done many other ways, but this will be part of an Oozie workflow, so it made sense to just do it in MR for now. My issue is that when I try the larger ID files it only outputs part of the resulting data set, and there are no errors to be found. Part of me thinks that I need to tweak some site configuration properties, due to the amount of data spilling to disk, but after scanning through all three *-site.xml files I am having trouble pinpointing anything I think could be causing this. I moved from reading the file from HDFS to using the distributed cache for the join read, thinking that might solve my problem, but there seems to be something else I am overlooking.

Any advice is greatly appreciated!

Matt



Issue with MR code not scaling correctly with data sizes

Posted by "GOEKE, MATTHEW (AG/1000)" <ma...@monsanto.com>.
All,

I have an MR program that takes a list of IDs and generates the unique comparison set as a result. Example: if I have the list {1,2,3,4,5} then the resulting output would be {2x1, 3x2, 3x1, 4x3, 4x2, 4x1, 5x4, 5x3, 5x2, 5x1}, i.e. (n^2-n)/2 comparisons. My code works just fine on smaller sets (I can verify fewer than 1000 fairly easily) but fails when I try to push the set to 10-20k IDs, which is annoying when the end goal is 1-10 million.

The flow of the program is (a sketch of a step-1 partitioner follows below):
	1) Partition the IDs evenly, based on the amount of output per value, into a set of keys equal to the number of reduce slots we currently have
	2) Use the distributed cache to push the ID file out to the various reducers
	3) In the setup of the reducer, populate an int array with the values from the ID file in the distributed cache
	4) Output a comparison only if the current ID from the values iterator is greater than the current entry in the int array

I realize that this could be done many other ways, but this will be part of an Oozie workflow, so it made sense to just do it in MR for now. My issue is that when I try the larger ID files it only outputs part of the resulting data set, and there are no errors to be found. Part of me thinks that I need to tweak some site configuration properties, due to the amount of data spilling to disk, but after scanning through all three *-site.xml files I am having trouble pinpointing anything I think could be causing this. I moved from reading the file from HDFS to using the distributed cache for the join read, thinking that might solve my problem, but there seems to be something else I am overlooking.
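
For illustration, a hedged sketch of one way to do the step-1 balancing, assuming non-negative integer IDs and the new (org.apache.hadoop.mapreduce) API; the thread does not show the actual partitioning scheme used:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.mapreduce.Partitioner;

    // ID k produces roughly k comparisons, so balancing key counts alone
    // skews the output volume. Striping IDs across 2*P slots and folding
    // the second half back pairs cheap (small-ID) keys with expensive
    // (large-ID) keys in the same reducer.
    public class FoldedPartitioner extends Partitioner<IntWritable, IntWritable> {
      @Override
      public int getPartition(IntWritable key, IntWritable value, int numPartitions) {
        int slot = key.get() % (2 * numPartitions);
        return slot < numPartitions ? slot : 2 * numPartitions - 1 - slot;
      }
    }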

Any advice is greatly appreciated!

Matt



Re: HTTP Error

Posted by Adarsh Sharma <ad...@orkash.com>.
Thanks Devaraj,

I am using Hadoop 0.20.2. In the beginning the cluster was working 
properly and I was able to see all the web UIs through the web browser,
but suddenly one day this problem arose.

How do I get the compiled jsp files into my classpath?
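
(One hedged way to check on 0.20.2, assuming a stock tarball layout: the web UI's JSPs are precompiled into the core jar, so if the grep below prints nothing, the jar on the classpath is likely the wrong one or from a broken build:)

    jar tf $HADOOP_HOME/hadoop-0.20.2-core.jar | grep dfshealth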


Thanks


Devaraj K wrote:
> Hi Adarsh,
>
>     Which version of hadoop are you using? 
>
> If you are using 0.21 or a later version, you need to set the environment
> variables HADOOP_COMMON_HOME, HADOOP_HDFS_HOME, and HADOOP_MAPREDUCE_HOME
> correctly. Otherwise this problem occurs.
>   


> If you are using a 0.20.* version, this problem occurs when the compiled jsp
> files are not on the java classpath.
>
>
>
> Devaraj K 
>
> -----Original Message-----
> From: Adarsh Sharma [mailto:adarsh.sharma@orkash.com] 
> Sent: Thursday, July 14, 2011 6:32 PM
> To: common-user@hadoop.apache.org
> Subject: Re: HTTP Error
>
> Any update on the HTTP Error? The issue still remains, but Hadoop is 
> functioning properly.
>
>
> Thanks
>
>
> Adarsh Sharma wrote:
>   
>> Thanks Joey I solved the problem of Safe mode by manually deleting 
>> some files ,
>>
>> bin/hadoop dfsadmin -report   , shows the all 2 nodes and safe mode 
>> gets OFF after some time. But,
>>
>> but I have no guess to solve the below error :
>>
>> WHy my web UI shows :
>>
>>     
>>>>>  HTTP ERROR: 404
>>>>>
>>>>> /dfshealth.jsp
>>>>>
>>>>> RequestURI=/dfshealth.jsp
>>>>>
>>>>> /Powered by Jetty:// <http://jetty.mortbay.org/>
>>>>> /
>>>>>           
>>
>> Any views on it. Please help
>>
>> Thanks
>>
>>
>>
>>
>> Joey Echeverria wrote:
>>     
>>> It looks like both datanodes are trying to serve data out of the same 
>>> directory. Is there any chance that both datanodes are using the same 
>>> NFS mount for the dfs.data.dir?
>>>
>>> If not, what I would do is delete the data from ${dfs.data.dir} and 
>>> then re-format the namenode. You'll lose all of your data, hopefully 
>>> that's not a problem at this time.
>>> -Joey
>>>
>>>
>>> On Jul 8, 2011, at 0:40, Adarsh Sharma <ad...@orkash.com> wrote:
>>>
>>>  
>>>       
>>>> Thanks, I still don't understand the issue.
>>>>
>>>> My name node has repeatedly shown these logs :
>>>>
>>>> 2011-07-08 09:36:31,365 INFO 
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: 
>>>> ugi=hadoop,hadoop    ip=/MAster-IP   cmd=listStatus    
>>>> src=/home/hadoop/system    dst=null    perm=null
>>>> 2011-07-08 09:36:31,367 INFO org.apache.hadoop.ipc.Server: IPC 
>>>> Server handler 2 on 9000, call delete(/home/hadoop/system, true) 
>>>> from Master-IP:53593: error: 
>>>> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot 
>>>> delete /home/hadoop/system. Name node is in safe mode.
>>>> The ratio of reported blocks 0.8293 has not reached the threshold 
>>>> 0.9990. Safe mode will be turned off automatically.
>>>> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot 
>>>> delete /home/hadoop/system. Name node is in safe mode.
>>>> The ratio of reported blocks 0.8293 has not reached the threshold 
>>>> 0.9990. Safe mode will be turned off automatically.
>>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1700)
>>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1680)
>>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:517)
>>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>>>
>>>>
>>>> And one of my data nodes shows the below logs :
>>>>
>>>> 2011-07-08 09:49:56,967 INFO 
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand 
>>>> action: DNA_REGISTER
>>>> 2011-07-08 09:49:59,962 WARN 
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is 
>>>> shutting down: org.apache.hadoop.ipc.RemoteException: 
>>>> org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Data 
>>>> node 192.168.0.209:50010 is attempting to report storage ID 
>>>> DS-218695497-SLave_IP-50010-1303978807280. Node SLave_IP:50010 is 
>>>> expected to serve this storage.
>>>>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDatanode(FSNamesystem.java:3920)
>>>>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.processReport(FSNamesystem.java:2891)
>>>>       at org.apache.hadoop.hdfs.server.namenode.NameNode.blockReport(NameNode.java:715)
>>>>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>       at java.lang.reflect.Method.invoke(Method.java:597)
>>>>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>>>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>>>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>>>       at java.security.AccessController.doPrivileged(Native Method)
>>>>       at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>>>
>>>>       at org.apache.hadoop.ipc.Client.call(Client.java:740)
>>>>       at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>>>       at $Proxy4.blockReport(Unknown Source)
>>>>       at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:756)
>>>>       at 
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1186)
>>>>       at java.lang.Thread.run(Thread.java:619)
>>>>
>>>> 2011-07-08 09:50:00,072 INFO org.apache.hadoop.ipc.Server: Stopping 
>>>> server on 50020
>>>> 2011-07-08 09:50:00,072 INFO org.apache.hadoop.ipc.Server: IPC 
>>>> Server handler 1 on 50020: exiting
>>>> 2011-07-08 09:50:00,074 INFO org.apache.hadoop.ipc.Server: IPC 
>>>> Server handler 2 on 50020: exiting
>>>> 2011-07-08 09:50:00,074 INFO org.apache.hadoop.ipc.Server: IPC 
>>>> Server handler 0 on 50020: exiting
>>>> 2011-07-08 09:50:00,076 INFO org.apache.hadoop.ipc.Server: Stopping 
>>>> IPC Server listener on 50020
>>>> 2011-07-08 09:50:00,077 INFO org.apache.hadoop.ipc.Server: Stopping 
>>>> IPC Server Responder
>>>> 2011-07-08 09:50:00,077 INFO 
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for 
>>>> threadgroup to exit, active threads is 1
>>>> 2011-07-08 09:50:00,078 WARN 
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: 
>>>> DatanodeRegistration(SLave_IP:50010, 
>>>> storageID=DS-218695497-192.168.0.209-50010-1303978807280, 
>>>> infoPort=50075, ipcPort=50020):DataXceiveServer: 
>>>> java.nio.channels.AsynchronousCloseException
>>>>       at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
>>>>       at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
>>>>       at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
>>>>       at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
>>>>       at java.lang.Thread.run(Thread.java:619)
>>>>
>>>> 2011-07-08 09:50:00,394 INFO 
>>>> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Exiting 
>>>> DataBlockScanner thread.
>>>> 2011-07-08 09:50:01,079 INFO 
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for 
>>>> threadgroup to exit, active threads is 0
>>>> 2011-07-08 09:50:01,183 INFO 
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: 
>>>> DatanodeRegistration(192.168.0.209:50010, 
>>>> storageID=DS-218695497-192.168.0.209-50010-1303978807280, 
>>>> infoPort=50075, ipcPort=50020):Finishing DataNode in: 
>>>> FSDataset{dirpath='/hdd1-1/data/current'}
>>>> 2011-07-08 09:50:01,183 INFO org.apache.hadoop.ipc.Server: Stopping 
>>>> server on 50020
>>>> 2011-07-08 09:50:01,183 INFO 
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for 
>>>> threadgroup to exit, active threads is 0
>>>> 2011-07-08 09:50:01,185 INFO 
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>>>> /************************************************************
>>>> SHUTDOWN_MSG: Shutting down DataNode at ws14-suru-lin/
>>>>
>>>> Also my dfsadmin report shows :
>>>>
>>>> bash-3.2$ bin/hadoop dfsadmin -report
>>>> Safe mode is ON
>>>> Configured Capacity: 59069984768 (55.01 GB)
>>>> Present Capacity: 46471880704 (43.28 GB)
>>>> DFS Remaining: 45169745920 (42.07 GB)
>>>> DFS Used: 1302134784 (1.21 GB)
>>>> DFS Used%: 2.8%
>>>> Under replicated blocks: 0
>>>> Blocks with corrupt replicas: 0
>>>> Missing blocks: 0
>>>>
>>>> -------------------------------------------------
>>>> Datanodes available: 1 (1 total, 0 dead)
>>>>
>>>> Name: IP:50010
>>>> Decommission Status : Normal
>>>> Configured Capacity: 59069984768 (55.01 GB)
>>>> DFS Used: 1302134784 (1.21 GB)
>>>> Non DFS Used: 12598104064 (11.73 GB)
>>>> DFS Remaining: 45169745920(42.07 GB)
>>>> DFS Used%: 2.2%
>>>> DFS Remaining%: 76.47%
>>>> Last contact: Fri Jul 08 10:03:40 IST 2011
>>>>
>>>> But I have 2 datanodes. Safe mode has been on for the last 1 hour. I know 
>>>> the command to leave it manually.
>>>> I think the problem arises because one of my datanodes did not start 
>>>> up. How can I solve this problem?
>>>>
>>>> Also for
>>>>
>>>> HTTP ERROR: 404
>>>>
>>>> /dfshealth.jsp
>>>>
>>>> RequestURI=/dfshealth.jsp
>>>>
>>>> /Powered by Jetty:// <http://jetty.mortbay.org/> error,
>>>>
>>>> I manually check through below command at all nodes On Master :
>>>>
>>>> bash-3.2$ /usr/java/jdk1.6.0_18/bin/jps
>>>> 7548 SecondaryNameNode
>>>> 7395 NameNode
>>>> 7628 JobTracker
>>>> 7713 Jps
>>>>
>>>> And also on slaves :
>>>>
>>>> [root@ws33-shiv-lin ~]# /usr/java/jdk1.6.0_20/bin/jps
>>>> 5696 DataNode
>>>> 5941 Jps
>>>> 5818 TaskTracker
>>>>
>>>>
>>>>
>>>>
>>>> Thanks
>>>>
>>>>
>>>>
>>>> Jeff.Schmitz@shell.com wrote:
>>>>    
>>>>         
>>>>> Adarsh,
>>>>>
>>>>> You could also run from command line
>>>>>
>>>>> [root@xxxxxxx bin]# ./hadoop dfsadmin -report
>>>>> Configured Capacity: 1151948095488 (1.05 TB)
>>>>> Present Capacity: 1059350446080 (986.6 GB)
>>>>> DFS Remaining: 1056175992832 (983.64 GB)
>>>>> DFS Used: 3174453248 (2.96 GB)
>>>>> DFS Used%: 0.3%
>>>>> Under replicated blocks: 0
>>>>> Blocks with corrupt replicas: 0
>>>>> Missing blocks: 0
>>>>>
>>>>> -------------------------------------------------
>>>>> Datanodes available: 5 (5 total, 0 dead)
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> -----Original Message-----
>>>>> From: dhruv21@gmail.com [mailto:dhruv21@gmail.com] On Behalf Of Dhruv
>>>>> Kumar
>>>>> Sent: Thursday, July 07, 2011 10:01 AM
>>>>> To: common-user@hadoop.apache.org
>>>>> Subject: Re: HTTP Error
>>>>>
>>>>> 1) Check with jps to see if all services are functioning.
>>>>>
>>>>> 2) Have you tried appending dfshealth.jsp at the end of the URL as the
>>>>> 404
>>>>> says?
>>>>>
>>>>> Try using this:
>>>>> http://localhost:50070/dfshealth.jsp
>>>>>
>>>>>
>>>>>
>>>>> On Thu, Jul 7, 2011 at 7:13 AM, Adarsh Sharma
>>>>> <ad...@orkash.com>wrote:
>>>>>
>>>>>  
>>>>>      
>>>>>           
>>>>>> Dear all,
>>>>>>
>>>>>> Today I am stucked with the strange problem in the running hadoop
>>>>>>            
>>>>>>             
>>>>> cluster.
>>>>>  
>>>>>      
>>>>>           
>>>>>> After starting hadoop by bin/start-all.sh, all nodes are started. But
>>>>>>            
>>>>>>             
>>>>> when
>>>>>  
>>>>>      
>>>>>           
>>>>>> I check through web UI ( MAster-Ip:50070), It shows :
>>>>>>
>>>>>>
>>>>>>  HTTP ERROR: 404
>>>>>>
>>>>>> /dfshealth.jsp
>>>>>>
>>>>>> RequestURI=/dfshealth.jsp
>>>>>>
>>>>>> /Powered by Jetty:// <http://jetty.mortbay.org/>
>>>>>> /
>>>>>>
>>>>>> /I check by command line that hadoop cannot able to get out of safe
>>>>>>            
>>>>>>             
>>>>> mode.
>>>>>  
>>>>>      
>>>>>           
>>>>>> /
>>>>>>
>>>>>> /I know , manually command to leave safe mode
>>>>>> /
>>>>>>
>>>>>> /bin/hadoop dfsadmin -safemode leave
>>>>>> /
>>>>>>
>>>>>> /But How can I make hadoop  run properly and what are the reasons of
>>>>>>            
>>>>>>             
>>>>> this
>>>>>  
>>>>>      
>>>>>           
>>>>>> error
>>>>>> /
>>>>>>
>>>>>> /
>>>>>> Thanks
>>>>>> /
>>>>>>
>>>>>>
>>>>>>
>>>>>>            
>>>>>>             
>>>>>  
>>>>>       
>>>>>           
>>     
>
>   


RE: HTTP Error

Posted by Devaraj K <de...@huawei.com>.
Hi Adarsh,

    Which version of hadoop are you using? 

If you are using 0.21 or a later version, you need to set the environment
variables HADOOP_COMMON_HOME, HADOOP_HDFS_HOME, and HADOOP_MAPREDUCE_HOME
correctly. Otherwise this problem occurs.

If you are using a 0.20.* version, this problem occurs when the compiled jsp
files are not on the java classpath.
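
For example, in the shell environment that launches the daemons (the paths below are placeholders, not values from this thread):

    export HADOOP_COMMON_HOME=/path/to/hadoop-common
    export HADOOP_HDFS_HOME=/path/to/hadoop-hdfs
    export HADOOP_MAPREDUCE_HOME=/path/to/hadoop-mapreduce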



Devaraj K 

-----Original Message-----
From: Adarsh Sharma [mailto:adarsh.sharma@orkash.com] 
Sent: Thursday, July 14, 2011 6:32 PM
To: common-user@hadoop.apache.org
Subject: Re: HTTP Error

Any update on the HTTP Error? The issue still remains, but Hadoop is 
functioning properly.


Thanks


Adarsh Sharma wrote:
> Thanks Joey I solved the problem of Safe mode by manually deleting 
> some files ,
>
> bin/hadoop dfsadmin -report   , shows the all 2 nodes and safe mode 
> gets OFF after some time. But,
>
> but I have no guess to solve the below error :
>
> WHy my web UI shows :
>
>>>>  HTTP ERROR: 404
>>>>
>>>> /dfshealth.jsp
>>>>
>>>> RequestURI=/dfshealth.jsp
>>>>
>>>> /Powered by Jetty:// <http://jetty.mortbay.org/>
>>>> /
>
>
>
> Any views on it. Please help
>
> Thanks
>
>
>
>
> Joey Echeverria wrote:
>> It looks like both datanodes are trying to serve data out of the same 
>> directory. Is there any chance that both datanodes are using the same 
>> NFS mount for the dfs.data.dir?
>>
>> If not, what I would do is delete the data from ${dfs.data.dir} and 
>> then re-format the namenode. You'll lose all of your data, hopefully 
>> that's not a problem at this time.
>> -Joey
>>
>>
>> On Jul 8, 2011, at 0:40, Adarsh Sharma <ad...@orkash.com> wrote:
>>
>>  
>>> Thanks, I still don't understand the issue.
>>>
>>> My name node has repeatedly shown these logs :
>>>
>>> 2011-07-08 09:36:31,365 INFO 
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: 
>>> ugi=hadoop,hadoop    ip=/MAster-IP   cmd=listStatus    
>>> src=/home/hadoop/system    dst=null    perm=null
>>> 2011-07-08 09:36:31,367 INFO org.apache.hadoop.ipc.Server: IPC 
>>> Server handler 2 on 9000, call delete(/home/hadoop/system, true) 
>>> from Master-IP:53593: error: 
>>> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot 
>>> delete /home/hadoop/system. Name node is in safe mode.
>>> The ratio of reported blocks 0.8293 has not reached the threshold 
>>> 0.9990. Safe mode will be turned off automatically.
>>> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot 
>>> delete /home/hadoop/system. Name node is in safe mode.
>>> The ratio of reported blocks 0.8293 has not reached the threshold 
>>> 0.9990. Safe mode will be turned off automatically.
>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1700)
>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1680)
>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:517)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>>
>>>
>>> And one of my data nodes shows the below logs :
>>>
>>> 2011-07-08 09:49:56,967 INFO 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand 
>>> action: DNA_REGISTER
>>> 2011-07-08 09:49:59,962 WARN 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is 
>>> shutting down: org.apache.hadoop.ipc.RemoteException: 
>>> org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Data 
>>> node 192.168.0.209:50010 is attempting to report storage ID 
>>> DS-218695497-SLave_IP-50010-1303978807280. Node SLave_IP:50010 is 
>>> expected to serve this storage.
>>>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDatanode(FSNamesystem.java:3920)
>>>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.processReport(FSNamesystem.java:2891)
>>>       at org.apache.hadoop.hdfs.server.namenode.NameNode.blockReport(NameNode.java:715)
>>>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>       at java.lang.reflect.Method.invoke(Method.java:597)
>>>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>>       at java.security.AccessController.doPrivileged(Native Method)
>>>       at javax.security.auth.Subject.doAs(Subject.java:396)
>>>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>>
>>>       at org.apache.hadoop.ipc.Client.call(Client.java:740)
>>>       at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>>       at $Proxy4.blockReport(Unknown Source)
>>>       at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:756)
>>>       at 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1186)
>>>       at java.lang.Thread.run(Thread.java:619)
>>>
>>> 2011-07-08 09:50:00,072 INFO org.apache.hadoop.ipc.Server: Stopping 
>>> server on 50020
>>> 2011-07-08 09:50:00,072 INFO org.apache.hadoop.ipc.Server: IPC 
>>> Server handler 1 on 50020: exiting
>>> 2011-07-08 09:50:00,074 INFO org.apache.hadoop.ipc.Server: IPC 
>>> Server handler 2 on 50020: exiting
>>> 2011-07-08 09:50:00,074 INFO org.apache.hadoop.ipc.Server: IPC 
>>> Server handler 0 on 50020: exiting
>>> 2011-07-08 09:50:00,076 INFO org.apache.hadoop.ipc.Server: Stopping 
>>> IPC Server listener on 50020
>>> 2011-07-08 09:50:00,077 INFO org.apache.hadoop.ipc.Server: Stopping 
>>> IPC Server Responder
>>> 2011-07-08 09:50:00,077 INFO 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for 
>>> threadgroup to exit, active threads is 1
>>> 2011-07-08 09:50:00,078 WARN 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: 
>>> DatanodeRegistration(SLave_IP:50010, 
>>> storageID=DS-218695497-192.168.0.209-50010-1303978807280, 
>>> infoPort=50075, ipcPort=50020):DataXceiveServer: 
>>> java.nio.channels.AsynchronousCloseException
>>>       at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
>>>       at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
>>>       at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
>>>       at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
>>>       at java.lang.Thread.run(Thread.java:619)
>>>
>>> 2011-07-08 09:50:00,394 INFO 
>>> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Exiting 
>>> DataBlockScanner thread.
>>> 2011-07-08 09:50:01,079 INFO 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for 
>>> threadgroup to exit, active threads is 0
>>> 2011-07-08 09:50:01,183 INFO 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: 
>>> DatanodeRegistration(192.168.0.209:50010, 
>>> storageID=DS-218695497-192.168.0.209-50010-1303978807280, 
>>> infoPort=50075, ipcPort=50020):Finishing DataNode in: 
>>> FSDataset{dirpath='/hdd1-1/data/current'}
>>> 2011-07-08 09:50:01,183 INFO org.apache.hadoop.ipc.Server: Stopping 
>>> server on 50020
>>> 2011-07-08 09:50:01,183 INFO 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for 
>>> threadgroup to exit, active threads is 0
>>> 2011-07-08 09:50:01,185 INFO 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down DataNode at ws14-suru-lin/
>>>
>>> Also my dfsadmin report shows :
>>>
>>> bash-3.2$ bin/hadoop dfsadmin -report
>>> Safe mode is ON
>>> Configured Capacity: 59069984768 (55.01 GB)
>>> Present Capacity: 46471880704 (43.28 GB)
>>> DFS Remaining: 45169745920 (42.07 GB)
>>> DFS Used: 1302134784 (1.21 GB)
>>> DFS Used%: 2.8%
>>> Under replicated blocks: 0
>>> Blocks with corrupt replicas: 0
>>> Missing blocks: 0
>>>
>>> -------------------------------------------------
>>> Datanodes available: 1 (1 total, 0 dead)
>>>
>>> Name: IP:50010
>>> Decommission Status : Normal
>>> Configured Capacity: 59069984768 (55.01 GB)
>>> DFS Used: 1302134784 (1.21 GB)
>>> Non DFS Used: 12598104064 (11.73 GB)
>>> DFS Remaining: 45169745920(42.07 GB)
>>> DFS Used%: 2.2%
>>> DFS Remaining%: 76.47%
>>> Last contact: Fri Jul 08 10:03:40 IST 2011
>>>
>>> But I have 2 datanodes. Safe mode has been on for the last 1 hour. I know 
>>> the command to leave it manually.
>>> I think the problem arises because one of my datanodes did not start 
>>> up. How can I solve this problem?
>>>
>>> Also for
>>>
>>> HTTP ERROR: 404
>>>
>>> /dfshealth.jsp
>>>
>>> RequestURI=/dfshealth.jsp
>>>
>>> /Powered by Jetty:// <http://jetty.mortbay.org/> error,
>>>
>>> I manually check through below command at all nodes On Master :
>>>
>>> bash-3.2$ /usr/java/jdk1.6.0_18/bin/jps
>>> 7548 SecondaryNameNode
>>> 7395 NameNode
>>> 7628 JobTracker
>>> 7713 Jps
>>>
>>> And also on slaves :
>>>
>>> [root@ws33-shiv-lin ~]# /usr/java/jdk1.6.0_20/bin/jps
>>> 5696 DataNode
>>> 5941 Jps
>>> 5818 TaskTracker
>>>
>>>
>>>
>>>
>>> Thanks
>>>
>>>
>>>
>>> Jeff.Schmitz@shell.com wrote:
>>>    
>>>> Adarsh,
>>>>
>>>> You could also run from command line
>>>>
>>>> [root@xxxxxxx bin]# ./hadoop dfsadmin -report
>>>> Configured Capacity: 1151948095488 (1.05 TB)
>>>> Present Capacity: 1059350446080 (986.6 GB)
>>>> DFS Remaining: 1056175992832 (983.64 GB)
>>>> DFS Used: 3174453248 (2.96 GB)
>>>> DFS Used%: 0.3%
>>>> Under replicated blocks: 0
>>>> Blocks with corrupt replicas: 0
>>>> Missing blocks: 0
>>>>
>>>> -------------------------------------------------
>>>> Datanodes available: 5 (5 total, 0 dead)
>>>>
>>>>
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: dhruv21@gmail.com [mailto:dhruv21@gmail.com] On Behalf Of Dhruv
>>>> Kumar
>>>> Sent: Thursday, July 07, 2011 10:01 AM
>>>> To: common-user@hadoop.apache.org
>>>> Subject: Re: HTTP Error
>>>>
>>>> 1) Check with jps to see if all services are functioning.
>>>>
>>>> 2) Have you tried appending dfshealth.jsp at the end of the URL as the
>>>> 404
>>>> says?
>>>>
>>>> Try using this:
>>>> http://localhost:50070/dfshealth.jsp
>>>>
>>>>
>>>>
>>>> On Thu, Jul 7, 2011 at 7:13 AM, Adarsh Sharma
>>>> <ad...@orkash.com>wrote:
>>>>
>>>>  
>>>>      
>>>>> Dear all,
>>>>>
>>>>> Today I am stucked with the strange problem in the running hadoop
>>>>>            
>>>> cluster.
>>>>  
>>>>      
>>>>> After starting hadoop by bin/start-all.sh, all nodes are started. But
>>>>>            
>>>> when
>>>>  
>>>>      
>>>>> I check through web UI ( MAster-Ip:50070), It shows :
>>>>>
>>>>>
>>>>>  HTTP ERROR: 404
>>>>>
>>>>> /dfshealth.jsp
>>>>>
>>>>> RequestURI=/dfshealth.jsp
>>>>>
>>>>> /Powered by Jetty:// <http://jetty.mortbay.org/>
>>>>> /
>>>>>
>>>>> /I check by command line that hadoop cannot able to get out of safe
>>>>>            
>>>> mode.
>>>>  
>>>>      
>>>>> /
>>>>>
>>>>> /I know , manually command to leave safe mode
>>>>> /
>>>>>
>>>>> /bin/hadoop dfsadmin -safemode leave
>>>>> /
>>>>>
>>>>> /But How can I make hadoop  run properly and what are the reasons of
>>>>>            
>>>> this
>>>>  
>>>>      
>>>>> error
>>>>> /
>>>>>
>>>>> /
>>>>> Thanks
>>>>> /
>>>>>
>>>>>
>>>>>
>>>>>            
>>>>  
>>>>       
>
>


Re: HTTP Error

Posted by Adarsh Sharma <ad...@orkash.com>.
Any update on the HTTP Error? The issue still remains, but Hadoop is 
functioning properly.


Thanks


Adarsh Sharma wrote:
> Thanks Joey I solved the problem of Safe mode by manually deleting 
> some files ,
>
> bin/hadoop dfsadmin -report   , shows the all 2 nodes and safe mode 
> gets OFF after some time. But,
>
> but I have no guess to solve the below error :
>
> WHy my web UI shows :
>
>>>>  HTTP ERROR: 404
>>>>
>>>> /dfshealth.jsp
>>>>
>>>> RequestURI=/dfshealth.jsp
>>>>
>>>> /Powered by Jetty:// <http://jetty.mortbay.org/>
>>>> /
>
>
>
> Any views on it. Please help
>
> Thanks
>
>
>
>
> Joey Echeverria wrote:
>> It looks like both datanodes are trying to serve data out of the same 
>> directory. Is there any chance that both datanodes are using the same 
>> NFS mount for the dfs.data.dir?
>>
>> If not, what I would do is delete the data from ${dfs.data.dir} and 
>> then re-format the namenode. You'll lose all of your data, hopefully 
>> that's not a problem at this time.
>> -Joey
>>
>>
>> On Jul 8, 2011, at 0:40, Adarsh Sharma <ad...@orkash.com> wrote:
>>
>>  
>>> Thanks, I still don't understand the issue.
>>>
>>> My name node has repeatedly shown these logs :
>>>
>>> 2011-07-08 09:36:31,365 INFO 
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: 
>>> ugi=hadoop,hadoop    ip=/MAster-IP   cmd=listStatus    
>>> src=/home/hadoop/system    dst=null    perm=null
>>> 2011-07-08 09:36:31,367 INFO org.apache.hadoop.ipc.Server: IPC 
>>> Server handler 2 on 9000, call delete(/home/hadoop/system, true) 
>>> from Master-IP:53593: error: 
>>> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot 
>>> delete /home/hadoop/system. Name node is in safe mode.
>>> The ratio of reported blocks 0.8293 has not reached the threshold 
>>> 0.9990. Safe mode will be turned off automatically.
>>> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot 
>>> delete /home/hadoop/system. Name node is in safe mode.
>>> The ratio of reported blocks 0.8293 has not reached the threshold 
>>> 0.9990. Safe mode will be turned off automatically.
>>>   at 
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1700) 
>>>
>>>   at 
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1680) 
>>>
>>>   at 
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:517) 
>>>
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>   at 
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
>>>
>>>   at 
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 
>>>
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>>
>>>
>>> And one of my data nodes shows the below logs :
>>>
>>> 2011-07-08 09:49:56,967 INFO 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand 
>>> action: DNA_REGISTER
>>> 2011-07-08 09:49:59,962 WARN 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is 
>>> shutting down: org.apache.hadoop.ipc.RemoteException: 
>>> org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Data 
>>> node 192.168.0.209:50010 is attempting to report storage ID 
>>> DS-218695497-SLave_IP-50010-1303978807280. Node SLave_IP:50010 is 
>>> expected to serve this storage.
>>>       at 
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDatanode(FSNamesystem.java:3920) 
>>>
>>>       at 
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.processReport(FSNamesystem.java:2891) 
>>>
>>>       at 
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.blockReport(NameNode.java:715) 
>>>
>>>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>       at 
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
>>>
>>>       at 
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 
>>>
>>>       at java.lang.reflect.Method.invoke(Method.java:597)
>>>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>>       at java.security.AccessController.doPrivileged(Native Method)
>>>       at javax.security.auth.Subject.doAs(Subject.java:396)
>>>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>>
>>>       at org.apache.hadoop.ipc.Client.call(Client.java:740)
>>>       at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>>       at $Proxy4.blockReport(Unknown Source)
>>>       at 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:756) 
>>>
>>>       at 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1186)
>>>       at java.lang.Thread.run(Thread.java:619)
>>>
>>> 2011-07-08 09:50:00,072 INFO org.apache.hadoop.ipc.Server: Stopping 
>>> server on 50020
>>> 2011-07-08 09:50:00,072 INFO org.apache.hadoop.ipc.Server: IPC 
>>> Server handler 1 on 50020: exiting
>>> 2011-07-08 09:50:00,074 INFO org.apache.hadoop.ipc.Server: IPC 
>>> Server handler 2 on 50020: exiting
>>> 2011-07-08 09:50:00,074 INFO org.apache.hadoop.ipc.Server: IPC 
>>> Server handler 0 on 50020: exiting
>>> 2011-07-08 09:50:00,076 INFO org.apache.hadoop.ipc.Server: Stopping 
>>> IPC Server listener on 50020
>>> 2011-07-08 09:50:00,077 INFO org.apache.hadoop.ipc.Server: Stopping 
>>> IPC Server Responder
>>> 2011-07-08 09:50:00,077 INFO 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for 
>>> threadgroup to exit, active threads is 1
>>> 2011-07-08 09:50:00,078 WARN 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: 
>>> DatanodeRegistration(SLave_IP:50010, 
>>> storageID=DS-218695497-192.168.0.209-50010-1303978807280, 
>>> infoPort=50075, ipcPort=50020):DataXceiveServer: 
>>> java.nio.channels.AsynchronousCloseException
>>>       at 
>>> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185) 
>>>
>>>       at 
>>> sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152) 
>>>
>>>       at 
>>> sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
>>>       at 
>>> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130) 
>>>
>>>       at java.lang.Thread.run(Thread.java:619)
>>>
>>> 2011-07-08 09:50:00,394 INFO 
>>> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Exiting 
>>> DataBlockScanner thread.
>>> 2011-07-08 09:50:01,079 INFO 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for 
>>> threadgroup to exit, active threads is 0
>>> 2011-07-08 09:50:01,183 INFO 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: 
>>> DatanodeRegistration(192.168.0.209:50010, 
>>> storageID=DS-218695497-192.168.0.209-50010-1303978807280, 
>>> infoPort=50075, ipcPort=50020):Finishing DataNode in: 
>>> FSDataset{dirpath='/hdd1-1/data/current'}
>>> 2011-07-08 09:50:01,183 INFO org.apache.hadoop.ipc.Server: Stopping 
>>> server on 50020
>>> 2011-07-08 09:50:01,183 INFO 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for 
>>> threadgroup to exit, active threads is 0
>>> 2011-07-08 09:50:01,185 INFO 
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down DataNode at ws14-suru-lin/
>>>
>>> Also my dfsadmin report shows :
>>>
>>> bash-3.2$ bin/hadoop dfsadmin -report
>>> Safe mode is ON
>>> Configured Capacity: 59069984768 (55.01 GB)
>>> Present Capacity: 46471880704 (43.28 GB)
>>> DFS Remaining: 45169745920 (42.07 GB)
>>> DFS Used: 1302134784 (1.21 GB)
>>> DFS Used%: 2.8%
>>> Under replicated blocks: 0
>>> Blocks with corrupt replicas: 0
>>> Missing blocks: 0
>>>
>>> -------------------------------------------------
>>> Datanodes available: 1 (1 total, 0 dead)
>>>
>>> Name: IP:50010
>>> Decommission Status : Normal
>>> Configured Capacity: 59069984768 (55.01 GB)
>>> DFS Used: 1302134784 (1.21 GB)
>>> Non DFS Used: 12598104064 (11.73 GB)
>>> DFS Remaining: 45169745920(42.07 GB)
>>> DFS Used%: 2.2%
>>> DFS Remaining%: 76.47%
>>> Last contact: Fri Jul 08 10:03:40 IST 2011
>>>
>>> But I have 2 datanodes. Safe mode has been on for the last 1 hour. I know 
>>> the command to leave it manually.
>>> I think the problem arises because one of my datanodes did not start 
>>> up. How can I solve this problem?
>>>
>>> Also for
>>>
>>> HTTP ERROR: 404
>>>
>>> /dfshealth.jsp
>>>
>>> RequestURI=/dfshealth.jsp
>>>
>>> /Powered by Jetty:// <http://jetty.mortbay.org/> error,
>>>
>>> I manually check through below command at all nodes On Master :
>>>
>>> bash-3.2$ /usr/java/jdk1.6.0_18/bin/jps
>>> 7548 SecondaryNameNode
>>> 7395 NameNode
>>> 7628 JobTracker
>>> 7713 Jps
>>>
>>> And also on slaves :
>>>
>>> [root@ws33-shiv-lin ~]# /usr/java/jdk1.6.0_20/bin/jps
>>> 5696 DataNode
>>> 5941 Jps
>>> 5818 TaskTracker
>>>
>>>
>>>
>>>
>>> Thanks
>>>
>>>
>>>
>>> Jeff.Schmitz@shell.com wrote:
>>>    
>>>> Adarsh,
>>>>
>>>> You could also run from command line
>>>>
>>>> [root@xxxxxxx bin]# ./hadoop dfsadmin -report
>>>> Configured Capacity: 1151948095488 (1.05 TB)
>>>> Present Capacity: 1059350446080 (986.6 GB)
>>>> DFS Remaining: 1056175992832 (983.64 GB)
>>>> DFS Used: 3174453248 (2.96 GB)
>>>> DFS Used%: 0.3%
>>>> Under replicated blocks: 0
>>>> Blocks with corrupt replicas: 0
>>>> Missing blocks: 0
>>>>
>>>> -------------------------------------------------
>>>> Datanodes available: 5 (5 total, 0 dead)
>>>>
>>>>
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: dhruv21@gmail.com [mailto:dhruv21@gmail.com] On Behalf Of Dhruv
>>>> Kumar
>>>> Sent: Thursday, July 07, 2011 10:01 AM
>>>> To: common-user@hadoop.apache.org
>>>> Subject: Re: HTTP Error
>>>>
>>>> 1) Check with jps to see if all services are functioning.
>>>>
>>>> 2) Have you tried appending dfshealth.jsp at the end of the URL as the
>>>> 404
>>>> says?
>>>>
>>>> Try using this:
>>>> http://localhost:50070/dfshealth.jsp
>>>>
>>>>
>>>>
>>>> On Thu, Jul 7, 2011 at 7:13 AM, Adarsh Sharma
>>>> <ad...@orkash.com>wrote:
>>>>
>>>>  
>>>>      
>>>>> Dear all,
>>>>>
>>>>> Today I am stucked with the strange problem in the running hadoop
>>>>>            
>>>> cluster.
>>>>  
>>>>      
>>>>> After starting hadoop by bin/start-all.sh, all nodes are started. But
>>>>>            
>>>> when
>>>>  
>>>>      
>>>>> I check through web UI ( MAster-Ip:50070), It shows :
>>>>>
>>>>>
>>>>>  HTTP ERROR: 404
>>>>>
>>>>> /dfshealth.jsp
>>>>>
>>>>> RequestURI=/dfshealth.jsp
>>>>>
>>>>> /Powered by Jetty:// <http://jetty.mortbay.org/>
>>>>> /
>>>>>
>>>>> /I check by command line that hadoop cannot able to get out of safe
>>>>>            
>>>> mode.
>>>>  
>>>>      
>>>>> /
>>>>>
>>>>> /I know , manually command to leave safe mode
>>>>> /
>>>>>
>>>>> /bin/hadoop dfsadmin -safemode leave
>>>>> /
>>>>>
>>>>> /But How can I make hadoop  run properly and what are the reasons of
>>>>>            
>>>> this
>>>>  
>>>>      
>>>>> error
>>>>> /
>>>>>
>>>>> /
>>>>> Thanks
>>>>> /
>>>>>
>>>>>
>>>>>
>>>>>            
>>>>  
>>>>       
>
>


Re: HTTP Error

Posted by Adarsh Sharma <ad...@orkash.com>.
Thanks Joey, I solved the problem of safe mode by manually deleting some 
files.

bin/hadoop dfsadmin -report now shows all 2 nodes, and safe mode goes 
OFF after some time.

But I have no idea how to solve the below error:

Why does my web UI show:

>>>  HTTP ERROR: 404
>>> 
>>> /dfshealth.jsp
>>> 
>>> RequestURI=/dfshealth.jsp
>>> 
>>> Powered by Jetty:// <http://jetty.mortbay.org/>



Any views on it? Please help.

Thanks
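
A check worth making at this point, since dfsadmin works but the web UI does 
not (a sketch: HADOOP_HOME and the webapps layout below are assumptions 
based on a 0.20-era install, so adjust to your setup):

$ ls $HADOOP_HOME/webapps/hdfs/dfshealth.jsp

If that file is missing, or the daemon was started from a different 
HADOOP_HOME, Jetty starts but has no page to serve and answers 404, which 
matches the symptom here.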




Joey Echeverria wrote:
> It looks like both datanodes are trying to serve data out of the same directory. Is there any chance that both datanodes are using the same NFS mount for the dfs.data.dir?
>
> If not, what I would do is delete the data from ${dfs.data.dir} and then re-format the namenode. You'll lose all of your data; hopefully that's not a problem at this time. 
>
> -Joey 
>
> On Jul 8, 2011, at 0:40, Adarsh Sharma <ad...@orkash.com> wrote:
>
>> [...]


Re: HTTP Error

Posted by Joey Echeverria <jo...@cloudera.com>.
It looks like both datanodes are trying to serve data out of the same directory. Is there any chance that both datanodes are using the same NFS mount for the dfs.data.dir?
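
A quick way to check that (a sketch: /hdd1-1/data is the dfs.data.dir taken 
from the logs above, so adjust if yours differs) is to compare the storage 
ID recorded on each slave:

$ grep storageID /hdd1-1/data/current/VERSION    # run on both datanodes

If both machines print the same DS-... value, they are serving the same 
storage directory.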

If not, what I would do is delete the data from ${dfs.data.dir} and then re-format the namenode. You'll lose all of your data; hopefully that's not a problem at this time. 
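
A minimal sketch of that recovery, assuming the stock bin/ scripts and that 
/hdd1-1/data is the dfs.data.dir on every slave (both are assumptions, 
check your config first):

$ bin/stop-all.sh                 # on the master
$ rm -rf /hdd1-1/data/*           # on each datanode: wipes dfs.data.dir
$ bin/hadoop namenode -format     # on the master
$ bin/start-all.sh                # on the master

Again, only do this if losing the existing HDFS data is acceptable.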

-Joey 



On Jul 8, 2011, at 0:40, Adarsh Sharma <ad...@orkash.com> wrote:

> [...]

Re: HTTP Error

Posted by Adarsh Sharma <ad...@orkash.com>.
Thanks, I still don't understand the issue.

My name node repeatedly shows these logs:

2011-07-08 09:36:31,365 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop,hadoop    ip=/MAster-IP   cmd=listStatus    src=/home/hadoop/system    dst=null    perm=null
2011-07-08 09:36:31,367 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000, call delete(/home/hadoop/system, true) from Master-IP:53593: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /home/hadoop/system. Name node is in safe mode.
The ratio of reported blocks 0.8293 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /home/hadoop/system. Name node is in safe mode.
The ratio of reported blocks 0.8293 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1700)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1680)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:517)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)


And one of my data nodes shows the logs below:

2011-07-08 09:49:56,967 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action: DNA_REGISTER
2011-07-08 09:49:59,962 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is shutting down: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Data node 192.168.0.209:50010 is attempting to report storage ID DS-218695497-SLave_IP-50010-1303978807280. Node SLave_IP:50010 is expected to serve this storage.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDatanode(FSNamesystem.java:3920)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.processReport(FSNamesystem.java:2891)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.blockReport(NameNode.java:715)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

        at org.apache.hadoop.ipc.Client.call(Client.java:740)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy4.blockReport(Unknown Source)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:756)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1186)
        at java.lang.Thread.run(Thread.java:619)

2011-07-08 09:50:00,072 INFO org.apache.hadoop.ipc.Server: Stopping server on 50020
2011-07-08 09:50:00,072 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: exiting
2011-07-08 09:50:00,074 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: exiting
2011-07-08 09:50:00,074 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: exiting
2011-07-08 09:50:00,076 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 50020
2011-07-08 09:50:00,077 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2011-07-08 09:50:00,077 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
2011-07-08 09:50:00,078 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(SLave_IP:50010, storageID=DS-218695497-192.168.0.209-50010-1303978807280, infoPort=50075, ipcPort=50020):DataXceiveServer: java.nio.channels.AsynchronousCloseException
        at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
        at java.lang.Thread.run(Thread.java:619)

2011-07-08 09:50:00,394 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Exiting DataBlockScanner thread.
2011-07-08 09:50:01,079 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
2011-07-08 09:50:01,183 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.0.209:50010, storageID=DS-218695497-192.168.0.209-50010-1303978807280, infoPort=50075, ipcPort=50020):Finishing DataNode in: FSDataset{dirpath='/hdd1-1/data/current'}
2011-07-08 09:50:01,183 INFO org.apache.hadoop.ipc.Server: Stopping server on 50020
2011-07-08 09:50:01,183 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
2011-07-08 09:50:01,185 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ws14-suru-lin/

Also my dfsadmin report shows:

bash-3.2$ bin/hadoop dfsadmin -report
Safe mode is ON
Configured Capacity: 59069984768 (55.01 GB)
Present Capacity: 46471880704 (43.28 GB)
DFS Remaining: 45169745920 (42.07 GB)
DFS Used: 1302134784 (1.21 GB)
DFS Used%: 2.8%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: IP:50010
Decommission Status : Normal
Configured Capacity: 59069984768 (55.01 GB)
DFS Used: 1302134784 (1.21 GB)
Non DFS Used: 12598104064 (11.73 GB)
DFS Remaining: 45169745920(42.07 GB)
DFS Used%: 2.2%
DFS Remaining%: 76.47%
Last contact: Fri Jul 08 10:03:40 IST 2011

But I have 2 datanodes. Safe mode has been on for the last 1 hour. I know 
the command to leave it manually.
I think the problem arises because one of my datanodes did not start up. 
How can I solve this problem?

Also, for the

  HTTP ERROR: 404

/dfshealth.jsp

RequestURI=/dfshealth.jsp

Powered by Jetty:// <http://jetty.mortbay.org/> error,

I manually checked with the command below at all nodes.

On Master:

bash-3.2$ /usr/java/jdk1.6.0_18/bin/jps 
7548 SecondaryNameNode
7395 NameNode
7628 JobTracker
7713 Jps

And also on the slaves:

[root@ws33-shiv-lin ~]# /usr/java/jdk1.6.0_20/bin/jps 
5696 DataNode
5941 Jps
5818 TaskTracker




Thanks
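
One option for the datanode that did not come up is to start it on its own 
from the affected slave and watch its log, instead of bouncing the whole 
cluster. A sketch, assuming the stock scripts and the default log location 
under the Hadoop install directory:

$ bin/hadoop-daemon.sh start datanode
$ tail -f logs/hadoop-*-datanode-*.log

If it dies again with the UnregisteredDatanodeException shown above, the 
storage ID clash is the thing to fix first.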



Jeff.Schmitz@shell.com wrote:
> [...]


RE: HTTP Error

Posted by Je...@shell.com.
Adarsh,

You could also run this from the command line:

[root@xxxxxxx bin]# ./hadoop dfsadmin -report
Configured Capacity: 1151948095488 (1.05 TB)
Present Capacity: 1059350446080 (986.6 GB)
DFS Remaining: 1056175992832 (983.64 GB)
DFS Used: 3174453248 (2.96 GB)
DFS Used%: 0.3%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 5 (5 total, 0 dead)
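
The same tool also reports and controls safe mode. A short sketch (run from 
the Hadoop install directory on the master):

$ bin/hadoop dfsadmin -safemode get     # prints whether safe mode is ON or OFF
$ bin/hadoop dfsadmin -safemode wait    # blocks until the namenode leaves safe mode

The wait form is useful in scripts that must not touch HDFS while the 
namenode is still below its reported-blocks threshold.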




-----Original Message-----
From: dhruv21@gmail.com [mailto:dhruv21@gmail.com] On Behalf Of Dhruv
Kumar
Sent: Thursday, July 07, 2011 10:01 AM
To: common-user@hadoop.apache.org
Subject: Re: HTTP Error

[...]


Re: HTTP Error

Posted by Dhruv Kumar <dk...@ecs.umass.edu>.
1) Check with jps to see if all services are functioning.

2) Have you tried appending dfshealth.jsp at the end of the URL as the 404
says?

Try using this:
http://localhost:50070/dfshealth.jsp
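
From a shell, the same check can be scripted (a sketch: replace localhost 
with the master's IP when testing from another machine):

$ curl -i http://localhost:50070/dfshealth.jsp    # -i also prints the HTTP status line and headers

Seeing HTTP/1.1 200 here but a 404 in the browser would suggest the browser 
is hitting a different host or port.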



On Thu, Jul 7, 2011 at 7:13 AM, Adarsh Sharma <ad...@orkash.com> wrote:

> [...]