Posted to user@hbase.apache.org by Jack Levin <ma...@gmail.com> on 2011/11/10 04:24:12 UTC

errors after upgrade

Hey guys, I am getting these errors after moving to 0.90.4:

2011-11-09 19:22:51,220 ERROR
org.apache.hadoop.hbase.io.HbaseObjectWritable: Error in readFields
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:375)
	at org.apache.hadoop.hbase.client.Get.readFields(Get.java:377)
	at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:521)
	at org.apache.hadoop.hbase.ipc.HBaseRPC$Invocation.readFields(HBaseRPC.java:127)
	at org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:978)
	at org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:946)
	at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:522)
	at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:316)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
2011-11-09 19:22:51,220 WARN org.apache.hadoop.ipc.HBaseServer: IPC
Server listener on 60020: readAndProcess threw exception
java.io.IOException: Error in readFields. Count of bytes read: 0
java.io.IOException: Error in readFields
	at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:524)
	at org.apache.hadoop.hbase.ipc.HBaseRPC$Invocation.readFields(HBaseRPC.java:127)
	at org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:978)
	at org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:946)
	at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:522)
	at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:316)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:375)
	at org.apache.hadoop.hbase.client.Get.readFields(Get.java:377)
	at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:521)
	... 8 more


I would be really sad if this were a case of reading a row and
getting zero bytes back.  Perhaps it's the exception for a query when
the row does not exist?
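For what it's worth, the EOFException in the trace is exactly what java.io.DataInputStream.readInt throws whenever the stream ends before four bytes arrive; a minimal sketch (class and byte values are illustrative only):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class EofDemo {
    public static void main(String[] args) throws IOException {
        // A truncated request buffer: readInt needs 4 bytes but only 2 arrive,
        // as when a client closes the connection mid-request.
        DataInputStream in =
                new DataInputStream(new ByteArrayInputStream(new byte[] {0, 1}));
        try {
            in.readInt();
            System.out.println("read ok");
        } catch (EOFException e) {
            // Same exception Get.readFields surfaces in the log above.
            System.out.println("EOFException: stream truncated");
        }
    }
}
```

That would point at a truncated request on the wire rather than a missing row: a Get for a nonexistent row deserializes fine and simply returns an empty Result.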

-Jack

Re: errors after upgrade

Posted by Jack Levin <ma...@gmail.com>.
Has anyone seen this before?  We continue to see it on several of our clusters.

Thanks.

-Jack

On Wed, Nov 9, 2011 at 7:24 PM, Jack Levin <ma...@gmail.com> wrote:
> Hey guys, I am getting those errors after moving into 0.90.4:
> [original message and stack trace snipped; quoted in full above]

Re: errors after upgrade

Posted by Ted Yu <yu...@gmail.com>.
Get.java appears in the stack trace.
HBASE-3919 modified that file in 0.90.4, but the changes don't seem to be
related to serialization.

On Mon, Nov 14, 2011 at 3:10 PM, Stack <st...@duboce.net> wrote:

> On Mon, Nov 14, 2011 at 3:05 PM, Jack Levin <ma...@gmail.com> wrote:
> > No custom code. and I did enable RPC logging to see what might be
> > wrong, but nothing is showing that would be considered an error.  Its
> > happening on two of our clusters that run different functions, one
> > uses thrift, and the other is REST.  From the looks of the stack trace
> > it seems like a low level java error.   One of our clusters runs scans
> > without filters, and the other is just PUT and GET, both get errors.
> > We can add some more debug code into the source and try if you can
> > suggest of a way to produce more debugging info.
> >
>
> What about other side of the connection? Is client going away on us?
> Maybe these are slow queries and client has given up by the time the
> server gets around to processing the request?
>
> St.Ack
>

Re: errors after upgrade

Posted by Jack Levin <ma...@gmail.com>.
Nope, there are no timeouts; the queries are fast and 95% served from cache.
This looks like the region server tried to read some memory buffer and
got 0 bytes back.

-Jack

On Mon, Nov 14, 2011 at 3:10 PM, Stack <st...@duboce.net> wrote:
> On Mon, Nov 14, 2011 at 3:05 PM, Jack Levin <ma...@gmail.com> wrote:
>> No custom code. and I did enable RPC logging to see what might be
>> wrong, but nothing is showing that would be considered an error.  Its
>> happening on two of our clusters that run different functions, one
>> uses thrift, and the other is REST.  From the looks of the stack trace
>> it seems like a low level java error.   One of our clusters runs scans
>> without filters, and the other is just PUT and GET, both get errors.
>> We can add some more debug code into the source and try if you can
>> suggest of a way to produce more debugging info.
>>
>
> What about other side of the connection? Is client going away on us?
> Maybe these are slow queries and client has given up by the time the
> server gets around to processing the request?
>
> St.Ack
>

Re: errors after upgrade

Posted by Stack <st...@duboce.net>.
On Mon, Nov 14, 2011 at 3:05 PM, Jack Levin <ma...@gmail.com> wrote:
> No custom code. and I did enable RPC logging to see what might be
> wrong, but nothing is showing that would be considered an error.  Its
> happening on two of our clusters that run different functions, one
> uses thrift, and the other is REST.  From the looks of the stack trace
> it seems like a low level java error.   One of our clusters runs scans
> without filters, and the other is just PUT and GET, both get errors.
> We can add some more debug code into the source and try if you can
> suggest of a way to produce more debugging info.
>

What about the other side of the connection? Is the client going away on us?
Maybe these are slow queries and the client has given up by the time the
server gets around to processing the request?

St.Ack
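One way to test that hunch is to rule the client out: raise the client-side RPC timeout and watch whether the EOFExceptions stop. A hedged sketch, assuming the 0.90-era `hbase.rpc.timeout` client property (default 60000 ms) is honored by the Thrift/REST gateways in question:

```xml
<!-- hbase-site.xml on the client (Thrift/REST gateway) side -->
<property>
  <name>hbase.rpc.timeout</name>
  <!-- assumed default is 60000 ms; raise it so the client does not
       hang up before the server finishes reading the request -->
  <value>120000</value>
</property>
```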

Re: errors after upgrade

Posted by Jack Levin <ma...@gmail.com>.
No custom code, and I did enable RPC logging to see what might be
wrong, but nothing shows up that would be considered an error.  It's
happening on two of our clusters that serve different functions: one
uses Thrift, and the other REST.  From the look of the stack trace
it seems like a low-level Java error.  One of our clusters runs scans
without filters, and the other does just PUTs and GETs; both get the
errors.  We can add more debug code to the source and retry if you can
suggest a way to produce more debugging info.

Thanks.

-jack

On Mon, Nov 14, 2011 at 2:27 PM, Stack <st...@duboce.net> wrote:
> On Wed, Nov 9, 2011 at 7:24 PM, Jack Levin <ma...@gmail.com> wrote:
>> Hey guys, I am getting those errors after moving into 0.90.4:
>>
>
> You have custom code on the server-side Jack?  A filter or something?
>
> You could turn on rpc logging.  It could give you more clues on what
> is messing up.  You could turn it on on a single node in the UI w/o
> having to restart a node; see the 'Log Level' servlet... its along the
> top of the UI.  Set the class
> log4j.logger.org.apache.hadoop.ipc.HBaseServer to DEBUG level.  It'll
> spew a bunch of logs and hopefully you can see whats off.  You can
> disable it again similarly.
>
> St.Ack
>
>> [original message and stack trace snipped; quoted in full above]
>

Re: errors after upgrade

Posted by Stack <st...@duboce.net>.
On Wed, Nov 9, 2011 at 7:24 PM, Jack Levin <ma...@gmail.com> wrote:
> Hey guys, I am getting those errors after moving into 0.90.4:
>

You have custom code on the server side, Jack?  A filter or something?

You could turn on RPC logging; it could give you more clues on what
is messing up.  You can turn it on on a single node in the UI without
having to restart the node; see the 'Log Level' servlet along the
top of the UI.  Set the class
log4j.logger.org.apache.hadoop.ipc.HBaseServer to DEBUG level.  It'll
spew a bunch of logs and hopefully you can see what's off.  You can
disable it again the same way.
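For reference, the equivalent static setting, if you'd rather it survive a restart than toggle it through the servlet (a hedged sketch; path assumes a standard HBase conf/ layout):

```properties
# conf/log4j.properties on the region server -- same effect as the
# 'Log Level' servlet change, but permanent until reverted
log4j.logger.org.apache.hadoop.ipc.HBaseServer=DEBUG
```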

St.Ack

> [original message and stack trace snipped; quoted in full above]