Posted to mapreduce-user@hadoop.apache.org by Adnan Karač <ad...@gmail.com> on 2015/05/26 10:04:32 UTC

Cannot obtain block length for LocatedBlock

Hi all,

I have an MR job that runs and then exits with the following exception.

java.io.IOException: Cannot obtain block length for LocatedBlock
{BP-1632531813-172.19.67.67-1393407344218:blk_1109280129_1099547327549;
getBlockSize()=139397; corrupt=false; offset=0; locs=[172.19.67.67:50010,
172.19.67.78:50010, 172.19.67.84:50010]}

Now, the fun part is that I don't know which file is in question. To find
out, I ran this:

*hdfs fsck -files -blocks / | grep blk_1109280129_1099547327549*

Interestingly enough, it came up with nothing.

Did anyone experience anything similar? Or does anyone have a piece of
advice on how to resolve this?

The Hadoop version is 2.3.0.

Thanks in advance!

-- 
Adnan Karač

Re: Cannot obtain block length for LocatedBlock

Posted by Liu Bo <di...@gmail.com>.
Hi Adnan,

I've met a similar problem: the reducer output file length was zero and
some bytes were missing at the end of the output file. The cause was that
I used MultipleOutputs and forgot to close it in the reducer's cleanup
method.
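
For reference, a minimal sketch of that fix (the class name, key/value
types and the "counts" named output are made up for illustration; the
named output would also need to be registered with
MultipleOutputs.addNamedOutput in the job driver):

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

public class ExampleReducer extends Reducer<Text, LongWritable, Text, LongWritable> {

    private MultipleOutputs<Text, LongWritable> mos;

    @Override
    protected void setup(Context context) {
        mos = new MultipleOutputs<>(context);
    }

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0;
        for (LongWritable value : values) {
            sum += value.get();
        }
        // Write through MultipleOutputs instead of context.write().
        mos.write("counts", key, new LongWritable(sum));
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // Without this close(), the last block of each output file can stay
        // under construction, which can later show up as
        // "Cannot obtain block length for LocatedBlock" when the file is read.
        mos.close();
    }
}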

Hope it helps.

On 26 May 2015 at 17:13, Adnan Karač <ad...@gmail.com> wrote:

> Hi Brahma,
>
> Thanks for the quick response. I assumed that running file check without
> *openforwrite* option would yield file in this block whether it was open
> for write and not. However, I have just tried it as well, unfortunately no
> success.
>
> Adnan
>
> On Tue, May 26, 2015 at 10:12 AM, Brahma Reddy Battula <
> brahmareddy.battula@huawei.com> wrote:
>
>>
>> Can you try like following..?
>>
>> * hdfs fsck -openforwrite -files -blocks -locations / |
>> grep blk_1109280129_1099547327549*
>>
>>
>>  Thanks & Regards
>>
>>  Brahma Reddy Battula
>>
>>
>>    ------------------------------
>> *From:* Adnan Karač [adnankarac@gmail.com]
>> *Sent:* Tuesday, May 26, 2015 1:34 PM
>> *To:* user@hadoop.apache.org
>> *Subject:* Cannot obtain block length for LocatedBlock
>>
>>   Hi all,
>>
>>  I have an MR job running and exiting with following exception.
>>
>>  java.io.IOException: Cannot obtain block length for LocatedBlock
>> {BP-1632531813-172.19.67.67-1393407344218:blk_1109280129_1099547327549;
>> getBlockSize()=139397; corrupt=false; offset=0; locs=[172.19.67.67:50010,
>> 172.19.67.78:50010, 172.19.67.84:50010]}
>>
>>  Now, the fun part is that i don't know which file is in question. In
>> order to find this out, i did this:
>>
>>  *hdfs fsck -files -blocks  / | grep blk_1109280129_1099547327549*
>>
>>  Interestingly enough, it came up with nothing.
>>
>>  Did anyone experience anything similar? Or does anyone have a piece of
>> advice on how to resolve this?
>>
>>  Version of hadoop is 2.3.0
>>
>>  Thanks in advance!
>>
>>  --
>> Adnan Karač
>>
>
>
>
> --
> Adnan Karač
>



-- 
All the best

Liu Bo

Re: Cannot obtain block length for LocatedBlock

Posted by Adnan Karač <ad...@gmail.com>.
Hi Brahma,

Thanks for the quick response. I assumed that running the file check
without the *openforwrite* option would list the file containing this
block whether or not it was open for write. However, I have just tried it
as well; unfortunately, no success.

Adnan

On Tue, May 26, 2015 at 10:12 AM, Brahma Reddy Battula <
brahmareddy.battula@huawei.com> wrote:

>
> Can you try like following..?
>
> * hdfs fsck -openforwrite -files -blocks -locations / |
> grep blk_1109280129_1099547327549*
>
>
>  Thanks & Regards
>
>  Brahma Reddy Battula
>
>
>    ------------------------------
> *From:* Adnan Karač [adnankarac@gmail.com]
> *Sent:* Tuesday, May 26, 2015 1:34 PM
> *To:* user@hadoop.apache.org
> *Subject:* Cannot obtain block length for LocatedBlock
>
>   Hi all,
>
>  I have an MR job running and exiting with following exception.
>
>  java.io.IOException: Cannot obtain block length for LocatedBlock
> {BP-1632531813-172.19.67.67-1393407344218:blk_1109280129_1099547327549;
> getBlockSize()=139397; corrupt=false; offset=0; locs=[172.19.67.67:50010,
> 172.19.67.78:50010, 172.19.67.84:50010]}
>
>  Now, the fun part is that i don't know which file is in question. In
> order to find this out, i did this:
>
>  *hdfs fsck -files -blocks  / | grep blk_1109280129_1099547327549*
>
>  Interestingly enough, it came up with nothing.
>
>  Did anyone experience anything similar? Or does anyone have a piece of
> advice on how to resolve this?
>
>  Version of hadoop is 2.3.0
>
>  Thanks in advance!
>
>  --
> Adnan Karač
>



-- 
Adnan Karač

RE: Cannot obtain block length for LocatedBlock

Posted by Brahma Reddy Battula <br...@huawei.com>.
Can you try the following?

hdfs fsck -openforwrite -files -blocks -locations / | grep blk_1109280129_1099547327549



Thanks & Regards

 Brahma Reddy Battula


________________________________
From: Adnan Karač [adnankarac@gmail.com]
Sent: Tuesday, May 26, 2015 1:34 PM
To: user@hadoop.apache.org
Subject: Cannot obtain block length for LocatedBlock

Hi all,

I have an MR job running and exiting with following exception.

java.io.IOException: Cannot obtain block length for LocatedBlock
{BP-1632531813-172.19.67.67-1393407344218:blk_1109280129_1099547327549; getBlockSize()=139397; corrupt=false; offset=0; locs=[172.19.67.67:50010, 172.19.67.78:50010, 172.19.67.84:50010]}

Now, the fun part is that i don't know which file is in question. In order to find this out, i did this:

hdfs fsck -files -blocks  / | grep blk_1109280129_1099547327549

Interestingly enough, it came up with nothing.

Did anyone experience anything similar? Or does anyone have a piece of advice on how to resolve this?

Version of hadoop is 2.3.0

Thanks in advance!

--
Adnan Karač
