Posted to common-user@hadoop.apache.org by John Martyniak <jo...@beforedawnsolutions.com> on 2009/11/10 17:32:11 UTC

java.io.IOException: Could not obtain block:

Hello everyone,

I am getting this error, java.io.IOException: Could not obtain block:,
when running on my new cluster.  When I ran the same job on the single
node it worked perfectly; I then added the second node and started
receiving this error.  I was running the grep sample job.
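
For reference, I was invoking it more or less like this (the jar name
assumes the stock examples jar that ships with 0.19.2, so adjust to
whatever you actually have):

  bin/hadoop fs -put conf input
  bin/hadoop jar hadoop-0.19.2-examples.jar grep input output 'dfs[a-z.]+'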

I am running Hadoop 0.19.2 because of a dependency on Nutch (even
though this was not a Nutch job).  I am not running HBase, and the
version of Java is OpenJDK 1.6.0.

Does anybody have any ideas?

Thanks in advance,

-John


Re: java.io.IOException: Could not obtain block:

Posted by John Martyniak <jo...@beforedawnsolutions.com>.
Edmund,

Thanks for the advice.  It turns out that it was the firewall running
on the second cluster node.

So I stopped that and all is working correctly.  Now that I have the
second node working the way it is supposed to, I will probably bring
another couple of nodes online.
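
In case anyone else runs into this, what it came down to on the second
node was roughly the following (assuming a Red Hat-style iptables
setup; the ports are the 0.19 defaults, so adjust them if you override
dfs.datanode.* in hadoop-site.xml):

  # see what the firewall is currently filtering
  iptables -L -n

  # the blunt fix: stop the firewall entirely
  service iptables stop

  # a more careful alternative is to open just the datanode ports:
  iptables -A INPUT -p tcp --dport 50010 -j ACCEPT  # data transfer (dfs.datanode.address)
  iptables -A INPUT -p tcp --dport 50020 -j ACCEPT  # IPC (dfs.datanode.ipc.address)
  iptables -A INPUT -p tcp --dport 50075 -j ACCEPT  # HTTP/status (dfs.datanode.http.address)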

Wish me luck:)

-John


Re: java.io.IOException: Could not obtain block:

Posted by Edmund Kohlwey <ek...@gmail.com>.
I've not encountered an error like this, but here are some suggestions:

1. Try to make sure that your two-node cluster is set up correctly.
Querying the web interface, using any of the included dfs utils (e.g.
hadoop dfs -ls), or looking in your log directory may yield more useful
stack traces or errors.  There is a quick sketch of this after the list.

2. Open up the source and check out the code around the stack trace.
This sucks, but Hadoop is actually pretty easy to surf through in
Eclipse, and most classes are kept to a reasonable number of lines and
are fairly readable.

3. Rip out the parts of Nutch you need, drop them into your project,
and forget about 0.19.  This isn't ideal, but you have to remember that
this whole ecosystem is still forming, and sometimes it makes sense to
rip stuff out and transplant it into your project rather than depend
on two or three classes from a project you otherwise don't use.
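
For (1), something along these lines usually narrows it down; the paths
below are just placeholders for whatever your job is actually reading:

  # live/dead datanodes, configured and remaining capacity
  hadoop dfsadmin -report

  # list the job's input directory
  hadoop dfs -ls /user/john/input

  # check that directory for missing or under-replicated blocks
  hadoop fsck /user/john/input -files -blocks -locations

The namenode web UI (http://<namenode>:50070/ by default) also shows
which datanodes are live and how many blocks each one holds.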
