Posted to hdfs-user@hadoop.apache.org by "AMARNATH, Balachandar" <BA...@airbus.com> on 2013/03/06 10:58:50 UTC

store file gives exception

Now I came out of the safe mode through the admin command. I tried to put a file into hdfs and encountered this error.

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/hosts could only be replicated to 0 nodes, instead of 1

Any hint to fix this?

This happens when the namenode is not a datanode. Am I making sense?
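
(One quick thing to check for this error, assuming a stock 1.x install, is
whether any datanodes are actually live and reporting free space to the
namenode, e.g. with:

hadoop dfsadmin -report

which lists each datanode together with its configured capacity and remaining
DFS space. Having no live datanodes, or only datanodes with no space left, is
one of the usual causes of the "replicated to 0 nodes" message.)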

With thanks and regards
Balachandar




The information in this e-mail is confidential. The contents may not be disclosed or used by anyone other than the addressee. Access to this e-mail by anyone else is unauthorised.
If you are not the intended recipient, please notify Airbus immediately and delete this e-mail.
Airbus cannot accept any responsibility for the accuracy or completeness of this e-mail as it has been sent over public networks. If you have any concerns over the content of this message or its Accuracy or Integrity, please contact Airbus immediately.
All outgoing e-mails from Airbus are checked using regularly updated virus scanning software but you should take whatever measures you deem to be appropriate to ensure that this message and any attachments are virus free.


Re: store file gives exception

Posted by Shumin Guo <gs...@gmail.com>.
Nitin is right. The Hadoop job tracker will schedule a job based on the
data block locations and the computing power of the nodes.

Based on the number of data blocks, the job tracker will split a job into
map tasks. Optimally, map tasks should be scheduled on nodes with local
data. Because one data block might be replicated on multiple nodes, the
job tracker picks which node runs the task for a given block using some
rules (such as the graylist, the configured scheduler, etc.).

BTW, if you do want to check the locations of the data blocks on HDFS, you
can use the following command:

hadoop fsck /user/ec2-user/randtext2/part-00000 -files -blocks -locations

And the output should be similar to:
FSCK started by ec2-user from /10.147.166.55 for path
/user/ec2-user/randtext2/part-00000 at Wed Mar 06 10:32:51 EST 2013
/user/ec2-user/randtext2/part-00000 1102234512 bytes, 17 block(s):  OK
0. blk_-1304750065421421106_1311 len=67108864 repl=2 [10.145.223.184:50010,
10.152.166.137:50010]
1. blk_-2917797815235442294_1315 len=67108864 repl=2 [10.145.231.46:50010,
10.152.166.137:50010]
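
If you need those block locations programmatically rather than from fsck (for
example to answer the "which datanode holds this file" question above), the
FileSystem API exposes them directly. A rough, untested sketch against the old
1.x API, with the configuration and path only as placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowBlockLocations {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml from the classpath
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path("/user/ec2-user/randtext2/part-00000");   // example path from the fsck run above
    FileStatus status = fs.getFileStatus(path);
    // One BlockLocation per block over the whole length of the file;
    // getHosts() returns the datanodes holding replicas of that block.
    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation block : blocks) {
      System.out.println("offset=" + block.getOffset()
          + " len=" + block.getLength()
          + " hosts=" + java.util.Arrays.toString(block.getHosts()));
    }
    fs.close();
  }
}

The printed hosts should correspond to the addresses fsck reports for each
block, as in the sample output above.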

Shumin-

On Wed, Mar 6, 2013 at 7:35 AM, Nitin Pawar <ni...@gmail.com> wrote:

> in hadoop you don't have to worry about data locality. The Hadoop job tracker
> will by default try to schedule a job where the data is located, provided
> it has enough compute capacity. Also note that a datanode just stores the
> blocks of a file, and multiple datanodes will have different blocks of the
> file.
>
>
> On Wed, Mar 6, 2013 at 5:52 PM, AMARNATH, Balachandar <
> BALACHANDAR.AMARNATH@airbus.com> wrote:
>
>> Hi all,
>>
>> I thought the below issue was coming because of the non-availability of enough
>> space. Hence, I replaced the datanodes with other nodes with more space and
>> it worked.
>>
>> Now, I have a working HDFS cluster. I am thinking of my application, where
>> I need to execute ‘a set of similar instructions’ (a job) over a large number
>> of files. I am planning to do this in parallel on different machines. I
>> would like to schedule each job to the datanode that already holds its input
>> file. At first, I shall store the files in HDFS. Now, to complete my task,
>> is there a scheduler available in the hadoop framework that, given the
>> input file required for a job, can return the data node name where the file
>> is actually stored? Am I making sense here?
>>
>> Regards
>> Bala
>>
>> From: AMARNATH, Balachandar [mailto:BALACHANDAR.AMARNATH@airbus.com]
>> Sent: 06 March 2013 16:49
>> To: user@hadoop.apache.org
>> Subject: RE: store file gives exception
>>
>> Hi,
>>
>> I could successfully install a hadoop cluster with three nodes (2 datanodes
>> and 1 namenode). However, when I tried to store a file, I got the following
>> error.
>>
>> 13/03/06 16:45:56 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
>> 13/03/06 16:45:56 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/bala/kumki/hosts" - Aborting...
>> put: java.io.IOException: File /user/bala/kumki/hosts could only be replicated to 0 nodes, instead of 1
>> 13/03/06 16:45:56 ERROR hdfs.DFSClient: Exception closing file /user/bala/kumki/hosts : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/bala/kumki/hosts could only be replicated to 0 nodes, instead of 1
>>             at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
>>             at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
>>             at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>             at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>             at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>             at java.lang.reflect.Method.invoke(Method.java:597)
>>             at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
>>             at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
>>             at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
>>             at java.security.AccessController.doPrivileged(Native Method)
>>             at javax.security.auth.Subject.doAs(Subject.java:396)
>>             at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>>             at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
>>
>> Any hint to fix this?
>>
>> Regards
>> Bala
>>
>> From: AMARNATH, Balachandar [mailto:BALACHANDAR.AMARNATH@airbus.com]
>> Sent: 06 March 2013 15:29
>> To: user@hadoop.apache.org
>> Subject: store file gives exception
>>
>> Now I came out of the safe mode through the admin command. I tried to put a
>> file into hdfs and encountered this error.
>>
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>> /user/hadoop/hosts could only be replicated to 0 nodes, instead of 1
>>
>> Any hint to fix this?
>>
>> This happens when the namenode is not a datanode. Am I making sense?
>>
>> With thanks and regards
>> Balachandar
>>
>
>
> --
> Nitin Pawar
>

Re:

Posted by Jean-Marc Spaggiari <je...@spaggiari.org>.
Hi Ashish,

It's an operation you have to do on your side....

Have you tried Google?

https://www.google.ca/search?q=unsubscribe+hadoop.apache.org&aq=f&oq=unsubscribe+hadoop.apache.org&aqs=chrome.0.57.2271&sourceid=chrome&ie=UTF-8

JM

2013/3/6  <as...@students.iitmandi.ac.in>:
> Unsubscribe me....
> How many more times, I have to mail u????????????????????
>

Re:

Posted by Panshul Whisper <ou...@gmail.com>.
lol...
as long as u dnt mail to

user-unsubscribe@hadoop.apache.org

noobs...


On Wed, Mar 6, 2013 at 2:03 PM,
<as...@students.iitmandi.ac.in>wrote:

> Unsubscribe me....
> How many more times, I have to mail u????????????????????
>
>


-- 
Regards,
Ouch Whisper
010101010101


Re: [unsubscribe noobs]

Posted by Panshul Whisper <ou...@gmail.com>.
lol... ahahah this was awesome..!!


On Wed, Mar 6, 2013 at 3:23 PM, Fabio Pitzolu <fa...@gr-ci.com>wrote:

> May I remind you of this simple law of IT?
>
> -- Fabio Pitzolu
>



-- 
Regards,
Ouch Whisper
010101010101


RE: [unsubscribe noobs]

Posted by Fabio Pitzolu <fa...@gr-ci.com>.
May I remind you of this simple law of IT?

 



 

-- Fabio Pitzolu

 



Re: [unsubscribe noobs]

Posted by Mike Spreitzer <ms...@us.ibm.com>.
The question is, how much more of this must we endure before the mailing 
list server gets smarter?  How about making it respond to any short 
message that includes the word "unsubscribe" with a message reminding the 
noob how to manage his subscription and how to send an email with the word 
"unsubscribe" that will be delivered?


Re:

Posted by Kai Voigt <k...@123.org>.
In my opinion, another 2782829 times, give or take a few.

Or try reading and understanding http://hadoop.apache.org/mailing_lists.html, which tells you to send an email to user-unsubscribe@hadoop.apache.org

Cheers
Kai

On 06.03.2013 at 14:03, ashish_kumar_gupta@students.iitmandi.ac.in wrote:

> Unsubscribe me....
> How many more times, I have to mail u????????????????????
> 
> 

-- 
Kai Voigt
k@123.org






(Unknown)

Posted by as...@students.iitmandi.ac.in.
Unsubscribe me....
How many more times, I have to mail u????????????????????


Re: store file gives exception

Posted by Nitin Pawar <ni...@gmail.com>.
in hadoop you don't have to worry about data locality. The Hadoop job tracker
will by default try to schedule a job where the data is located, provided
it has enough compute capacity. Also note that a datanode just stores the
blocks of a file, and multiple datanodes will have different blocks of the
file.
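
That locality information is also what the framework itself consumes: each
input split records the hosts that hold its data, and the job tracker tries to
run the corresponding map task on (or close to) one of them. A rough, untested
sketch with the old "mapred" API, using a made-up input path, just to show
where those hosts come from:

import java.util.Arrays;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;

public class ShowSplitLocations {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf();
    FileInputFormat.setInputPaths(conf, new Path("/user/bala/input"));   // made-up path
    TextInputFormat format = new TextInputFormat();
    format.configure(conf);
    // Roughly one split per HDFS block; getLocations() lists the datanodes
    // that hold that block, which is what the job tracker uses for scheduling.
    for (InputSplit split : format.getSplits(conf, 1)) {
      System.out.println(split + " -> " + Arrays.toString(split.getLocations()));
    }
  }
}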




-- 
Nitin Pawar

Re: store file gives exception

Posted by Nitin Pawar <ni...@gmail.com>.
in hadoop you don't have to worry about data locality. Hadoop job tracker
will by default try to schedule the job where the data is located in case
it has enough compute capacity. Also note that datanode just store the
blocks of file and multiple datanodes will have different blocks of the
file.


On Wed, Mar 6, 2013 at 5:52 PM, AMARNATH, Balachandar <
BALACHANDAR.AMARNATH@airbus.com> wrote:

> Hi all,****
>
> ** **
>
> I thought the below issue is coming because of non availability of enough
> space. Hence, I replaced the datanodes with other nodes with more space and
> it worked. ****
>
> ** **
>
> Now, I have a working HDFS cluster. I am thinking of my application where
> I need to execute ‘a set of similar instructions’  (job) over large number
> of files. I am planning to do this in parallel in different machines. I
> would like to schedule this job to the datanode that already has data input
> file in it. At first, I shall store the files in HDFS.  Now, to complete my
> task, Is there a scheduler available in hadoop framework that given the
> input file required for a job, can return the data node name where the file
> is actually stored?  Am I making sense here?****
>
> ** **
>
> Regards****
>
> Bala ****
>
> ** **
>
> ** **
>
> ** **
>
> *From:* AMARNATH, Balachandar [mailto:BALACHANDAR.AMARNATH@airbus.com]
> *Sent:* 06 March 2013 16:49
> *To:* user@hadoop.apache.org
> *Subject:* RE: store file gives exception****
>
> ** **
>
> Hi, ****
>
> ** **
>
> I could successfully install hadoop cluster with three nodes (2 datanodes
> and 1 namenode). However, when I tried to store a file, I get the following
> error.****
>
> ** **
>
> 13/03/06 16:45:56 WARN hdfs.DFSClient: Error Recovery for block null bad
> datanode[0] nodes == null****
>
> 13/03/06 16:45:56 WARN hdfs.DFSClient: Could not get block locations.
> Source file "/user/bala/kumki/hosts" - Aborting...****
>
> put: java.io.IOException: File /user/bala/kumki/hosts could only be
> replicated to 0 nodes, instead of 1****
>
> 13/03/06 16:45:56 ERROR hdfs.DFSClient: Exception closing file
> /user/bala/kumki/hosts : org.apache.hadoop.ipc.RemoteException:
> java.io.IOException: File /user/bala/kumki/hosts could only be replicated
> to 0 nodes, instead of 1****
>
>             at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
> ****
>
>             at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
> ****
>
>             at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ****
>
>             at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> ****
>
>             at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> ****
>
>             at java.lang.reflect.Method.invoke(Method.java:597)****
>
>             at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)****
>
>             at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> ****
>
>             at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> ****
>
>             at java.security.AccessController.doPrivileged(Native Method)*
> ***
>
>             at javax.security.auth.Subject.doAs(Subject.java:396)****
>
>             at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
> ****
>
>             at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)*
> ***
>
> ** **
>
> Any hint to fix this,****
>
> ** **
>
> ** **
>
> Regards****
>
> Bala****
>
> *From:* AMARNATH, Balachandar [mailto:BALACHANDAR.AMARNATH@airbus.com]
> *Sent:* 06 March 2013 15:29
> *To:* user@hadoop.apache.org
> *Subject:* store file gives exception****
>
> ** **
>
> Now I came out of the safe mode through admin command. I tried to put a
> file into hdfs and encountered this error.****
>
>  ****
>
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /user/hadoop/hosts could only be replicated to 0 nodes, instead of 1****
>
>  ****
>
> Any hint to fix this,****
>
>  ****
>
> This happens when the namenode is not datanode. Am I making sense?****
>
>  ****
>
> With thanks and regards****
>
> Balachandar****
>
>  ****
>
>  ****
>
>  ****
>
> The information in this e-mail is confidential. The contents may not be disclosed or used by anyone other than the addressee. Access to this e-mail by anyone else is unauthorised.****
>
> If you are not the intended recipient, please notify Airbus immediately and delete this e-mail.****
>
> Airbus cannot accept any responsibility for the accuracy or completeness of this e-mail as it has been sent over public networks. If you have any concerns over the content of this message or its Accuracy or Integrity, please contact Airbus immediately.****
>
> All outgoing e-mails from Airbus are checked using regularly updated virus scanning software but you should take whatever measures you deem to be appropriate to ensure that this message and any attachments are virus free.****
>
> The information in this e-mail is confidential. The contents may not be disclosed or used by anyone other than the addressee. Access to this e-mail by anyone else is unauthorised.****
>
> If you are not the intended recipient, please notify Airbus immediately and delete this e-mail.****
>
> Airbus cannot accept any responsibility for the accuracy or completeness of this e-mail as it has been sent over public networks. If you have any concerns over the content of this message or its Accuracy or Integrity, please contact Airbus immediately.****
>
> All outgoing e-mails from Airbus are checked using regularly updated virus scanning software but you should take whatever measures you deem to be appropriate to ensure that this message and any attachments are virus free.****
>
> The information in this e-mail is confidential. The contents may not be disclosed or used by anyone other than the addressee. Access to this e-mail by anyone else is unauthorised.
> If you are not the intended recipient, please notify Airbus immediately and delete this e-mail.
> Airbus cannot accept any responsibility for the accuracy or completeness of this e-mail as it has been sent over public networks. If you have any concerns over the content of this message or its Accuracy or Integrity, please contact Airbus immediately.
> All outgoing e-mails from Airbus are checked using regularly updated virus scanning software but you should take whatever measures you deem to be appropriate to ensure that this message and any attachments are virus free.
>
>


-- 
Nitin Pawar

Re: store file gives exception

Posted by Nitin Pawar <ni...@gmail.com>.
in hadoop you don't have to worry about data locality. Hadoop job tracker
will by default try to schedule the job where the data is located in case
it has enough compute capacity. Also note that datanode just store the
blocks of file and multiple datanodes will have different blocks of the
file.


On Wed, Mar 6, 2013 at 5:52 PM, AMARNATH, Balachandar <
BALACHANDAR.AMARNATH@airbus.com> wrote:

> Hi all,
>
> I thought the issue below was coming up because the datanodes did not have
> enough free space. I therefore replaced them with nodes that have more space,
> and it worked.
>
> Now I have a working HDFS cluster. I am thinking about my application, where
> I need to execute 'a set of similar instructions' (a job) over a large number
> of files. I plan to run these jobs in parallel on different machines, and I
> would like to schedule each job on the datanode that already holds its input
> file. I will store the files in HDFS first. To complete my task, is there a
> scheduler in the Hadoop framework that, given the input file required for a
> job, can return the name of the datanode where that file is actually stored?
> Am I making sense here?
>
> Regards
> Bala
>
> From: AMARNATH, Balachandar [mailto:BALACHANDAR.AMARNATH@airbus.com]
> Sent: 06 March 2013 16:49
> To: user@hadoop.apache.org
> Subject: RE: store file gives exception
>
> Hi,
>
> I was able to install a Hadoop cluster with three nodes (two datanodes and
> one namenode). However, when I try to store a file, I get the following
> error.
>
> 13/03/06 16:45:56 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
> 13/03/06 16:45:56 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/bala/kumki/hosts" - Aborting...
> put: java.io.IOException: File /user/bala/kumki/hosts could only be replicated to 0 nodes, instead of 1
> 13/03/06 16:45:56 ERROR hdfs.DFSClient: Exception closing file /user/bala/kumki/hosts : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/bala/kumki/hosts could only be replicated to 0 nodes, instead of 1
>             at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
>             at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
>             at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>             at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>             at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>             at java.lang.reflect.Method.invoke(Method.java:597)
>             at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
>             at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
>             at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
>             at java.security.AccessController.doPrivileged(Native Method)
>             at javax.security.auth.Subject.doAs(Subject.java:396)
>             at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>             at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
>
> Any hint to fix this?
>
> Regards
> Bala
>
> From: AMARNATH, Balachandar [mailto:BALACHANDAR.AMARNATH@airbus.com]
> Sent: 06 March 2013 15:29
> To: user@hadoop.apache.org
> Subject: store file gives exception
>
> I have now come out of safe mode through the admin command. I tried to put a
> file into HDFS and encountered this error:
>
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /user/hadoop/hosts could only be replicated to 0 nodes, instead of 1
>
> Any hint to fix this?
>
> Does this happen because the namenode is not also a datanode? Am I making sense?
>
> With thanks and regards
> Balachandar
>


-- 
Nitin Pawar

RE: store file gives exception

Posted by "AMARNATH, Balachandar" <BA...@airbus.com>.
Hi all,

I thought the issue below was coming up because the datanodes did not have enough free space. I therefore replaced them with nodes that have more space, and it worked.

Now I have a working HDFS cluster. I am thinking about my application, where I need to execute 'a set of similar instructions' (a job) over a large number of files. I plan to run these jobs in parallel on different machines, and I would like to schedule each job on the datanode that already holds its input file. I will store the files in HDFS first. To complete my task, is there a scheduler in the Hadoop framework that, given the input file required for a job, can return the name of the datanode where that file is actually stored? Am I making sense here?

Regards
Bala
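
The MapReduce layer already does this matching: when a job is submitted, the
InputFormat turns the input files into splits, each split carries the hostnames
of the datanodes holding its block, and the JobTracker uses those hints to
place map tasks. A rough sketch of how those locations can be inspected with
the old org.apache.hadoop.mapred API from Hadoop 1.x follows; the class name
and input path are placeholders:

    import java.util.Arrays;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileSplit;
    import org.apache.hadoop.mapred.InputSplit;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.TextInputFormat;

    public class ShowSplitLocations {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf();
        // Placeholder input directory; point this at the HDFS data for your job.
        FileInputFormat.setInputPaths(conf, new Path("/user/bala/input"));

        TextInputFormat format = new TextInputFormat();
        format.configure(conf);

        // Each split records the hosts that store its block; these are the
        // locality hints the JobTracker uses when assigning map tasks.
        for (InputSplit split : format.getSplits(conf, 1)) {
          FileSplit fileSplit = (FileSplit) split;
          System.out.println(fileSplit.getPath() + " offset=" + fileSplit.getStart()
              + " length=" + fileSplit.getLength()
              + " hosts=" + Arrays.toString(split.getLocations()));
        }
      }
    }

So as long as the files are in HDFS and the work is expressed as a MapReduce
job, there is no separate scheduler to query: the locality information travels
with the splits.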



From: AMARNATH, Balachandar [mailto:BALACHANDAR.AMARNATH@airbus.com]
Sent: 06 March 2013 16:49
To: user@hadoop.apache.org
Subject: RE: store file gives exception

Hi,

I was able to install a Hadoop cluster with three nodes (two datanodes and one namenode). However, when I try to store a file, I get the following error.

13/03/06 16:45:56 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
13/03/06 16:45:56 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/bala/kumki/hosts" - Aborting...
put: java.io.IOException: File /user/bala/kumki/hosts could only be replicated to 0 nodes, instead of 1
13/03/06 16:45:56 ERROR hdfs.DFSClient: Exception closing file /user/bala/kumki/hosts : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/bala/kumki/hosts could only be replicated to 0 nodes, instead of 1
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:396)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
            at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

Any hint to fix this?


Regards
Bala
From: AMARNATH, Balachandar [mailto:BALACHANDAR.AMARNATH@airbus.com]
Sent: 06 March 2013 15:29
To: user@hadoop.apache.org
Subject: store file gives exception

I have now come out of safe mode through the admin command. I tried to put a file into HDFS and encountered this error:

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/hosts could only be replicated to 0 nodes, instead of 1

Any hint to fix this?

Does this happen because the namenode is not also a datanode? Am I making sense?

With thanks and regards
Balachandar

RE: store file gives exception

Posted by "AMARNATH, Balachandar" <BA...@airbus.com>.
Hi,

I was able to install a Hadoop cluster with three nodes (two datanodes and one namenode). However, when I try to store a file, I get the following error.

13/03/06 16:45:56 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
13/03/06 16:45:56 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/bala/kumki/hosts" - Aborting...
put: java.io.IOException: File /user/bala/kumki/hosts could only be replicated to 0 nodes, instead of 1
13/03/06 16:45:56 ERROR hdfs.DFSClient: Exception closing file /user/bala/kumki/hosts : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/bala/kumki/hosts could only be replicated to 0 nodes, instead of 1
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:396)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
            at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

Any hint to fix this?


Regards
Bala
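
The "could only be replicated to 0 nodes, instead of 1" message usually means
the namenode does not currently see any live datanodes (or none with free
space); it does not require the namenode to also be a datanode. One quick
check, sketched here against the Hadoop 1.x client API (the class name is a
placeholder, and it assumes fs.default.name points at the cluster's namenode),
is to ask the namenode which datanodes have registered and how much space each
one reports:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

    public class LiveDatanodes {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The cast assumes the default filesystem is HDFS (fs.default.name = hdfs://...).
        DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);

        // One entry per datanode the namenode currently knows about.
        for (DatanodeInfo node : dfs.getDataNodeStats()) {
          System.out.println(node.getName()
              + " remaining=" + node.getRemaining()
              + " capacity=" + node.getCapacity() + " bytes");
        }
        dfs.close();
      }
    }

If this prints nothing, the datanodes have not registered with the namenode
(worth checking the datanode logs and their fs.default.name setting); the
namenode web UI and "hadoop dfsadmin -report" give the same picture.
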
From: AMARNATH, Balachandar [mailto:BALACHANDAR.AMARNATH@airbus.com]
Sent: 06 March 2013 15:29
To: user@hadoop.apache.org
Subject: store file gives exception

I have now come out of safe mode through the admin command. I tried to put a file into HDFS and encountered this error:

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/hosts could only be replicated to 0 nodes, instead of 1

Any hint to fix this?

Does this happen because the namenode is not also a datanode? Am I making sense?

With thanks and regards
Balachandar