Posted to hdfs-user@hadoop.apache.org by QiXiangming <qi...@hotmail.com> on 2014/09/17 04:29:01 UTC

hbase use error

Hello everyone,
        I use HBase to store small pictures and files, and I am hitting an exception raised from HDFS, as follows:
 slave2:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.246:33162 dest: /192.168.20.247:50010
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
	at java.lang.Thread.run(Thread.java:745)

When HBase stores pictures or files under 200 KB it works well, but if you load a file larger than 20 MB, HBase definitely goes down!
What's wrong? Can anyone help us?

URGENT!!!

Qi Xiangming
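(A common first check for repeated "Premature EOF from inputStream" DataXceiver errors under concurrent block writes is the DataNode's transfer-thread limit and write timeout. A minimal hdfs-site.xml sketch, assuming thread exhaustion or a slow write pipeline is the culprit -- nothing in this thread confirms that -- and with illustrative, untuned values:)

    <property>
      <!-- Upper bound on concurrent DataXceiver threads; formerly dfs.datanode.max.xcievers -->
      <name>dfs.datanode.max.transfer.threads</name>
      <value>8192</value>
    </property>
    <property>
      <!-- Socket write timeout in milliseconds (default 480000); extra headroom for slow pipelines -->
      <name>dfs.datanode.socket.write.timeout</name>
      <value>600000</value>
    </property>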

RE: hbase use error

Posted by mike Zarrin <mi...@unitedrmr.com>.
Please unsubscribe me

 

From: Ted Yu [mailto:yuzhihong@gmail.com] 
Sent: Tuesday, September 16, 2014 7:34 PM
To: common-user@hadoop.apache.org
Cc: common-dev@hadoop.apache.org
Subject: Re: hbase use error

 

Which Hadoop release are you using?

Can you pastebin more of the server logs?

 

bq. load file larger than 20M

 

Do you store such file(s) directly on HDFS and put their paths in HBase?

See HBASE-11339 (HBase MOB).
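(HBASE-11339 later shipped as the MOB feature. A minimal Java sketch of enabling it, using the HBase 2.x client API that eventually carried the feature; the "pics" table, "d" family, and 100 KB threshold are made-up illustrations, and none of this was available to the CDH 5 cluster in this thread:)

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateMobTable {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Values larger than the threshold are written to separate MOB files,
          // keeping large cells out of the normal flush/compaction path.
          admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("pics"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("d"))
                  .setMobEnabled(true)           // HBASE-11339
                  .setMobThreshold(100 * 1024L)  // 100 KB, illustrative
                  .build())
              .build());
        }
      }
    }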

 



RE: hbase use error

Posted by QiXiangming <qi...@hotmail.com>.




Dear Yu, this is a snippet of the log, thank you for the diagnosis!
It seems the upstream socket dropped, but I can't find a useful clue in the other DataNodes' logs!
 
------------------------------------------------------------------------------------------------------
 
2014-09-17 11:02:30,058 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1977874513-192.168.20.245-1397637902470:blk_1073991434_253116
2014-09-17 11:02:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861, type=HAS_DOWNSTREAM_IN_PIPELINE: Thread is interrupted.
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863, type=HAS_DOWNSTREAM_IN_PIPELINE: Thread is interrupted.
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2014-09-17 11:02:32,362 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861 received exception java.io.IOException: Premature EOF from inputStream
2014-09-17 11:02:32,362 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863 received exception java.io.IOException: Premature EOF from inputStream
2014-09-17 11:02:32,362 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: slave3:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.248:55292 dest: /192.168.20.248:50010
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:32,362 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: slave3:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.248:55304 dest: /192.168.20.248:50010
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:33,957 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:33,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860, type=LAST_IN_PIPELINE, downstreams=0:[]: Thread is interrupted.
2014-09-17 11:02:33,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-09-17 11:02:33,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860 received exception java.io.IOException: Premature EOF from inputStream
2014-09-17 11:02:33,958 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: slave3:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.246:45823 dest: /192.168.20.248:50010

 
Date: Tue, 16 Sep 2014 19:50:47 -0700
Subject: Re: hbase use error
From: yuzhihong@gmail.com
To: user@hadoop.apache.org

Please pastebin the log instead of sending to me.
See https://issues.apache.org/jira/browse/HBASE-11339
It is under development.
On Tue, Sep 16, 2014 at 7:47 PM, QiXiangming <qi...@hotmail.com> wrote:



Thank you for your reply so quickly. We use CDH 5 and store the files directly in HBase, not paths.
I have read some HBase schema-design guides; they recommend that for large files you store the path in HBase and put the real content in an HDFS SequenceFile. But I think 20 MB is not too big.
I am downloading those logs now and will send them to you later.
Where can I find HBASE-11339 (HBase MOB)?
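(A minimal Java sketch of that recommendation, against the 0.98-era client API current at the time: write the content into an HDFS SequenceFile and keep only a reference in HBase. The table, family, path, and key names are made-up illustrations, and this sketches the pattern rather than a confirmed fix for the error above:)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class StoreBlobByReference {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        FileSystem fs = FileSystem.get(conf);
        Path seq = new Path("/data/blobs/blobs-0001.seq");  // illustrative location
        String key = "pic-00042";                           // illustrative row key
        byte[] blob = Bytes.toBytes("...file content...");  // stand-in for the real bytes

        // Write the content into a SequenceFile on HDFS
        // (a real writer would batch many blobs per file).
        try (SequenceFile.Writer w = SequenceFile.createWriter(fs, conf, seq,
                Text.class, BytesWritable.class)) {
          w.append(new Text(key), new BytesWritable(blob));
        }

        // Store only the reference (file + key), not the content, in HBase.
        HTable table = new HTable(conf, "pics");
        Put p = new Put(Bytes.toBytes(key));
        p.add(Bytes.toBytes("meta"), Bytes.toBytes("loc"),
              Bytes.toBytes(seq + "#" + key));
        table.put(p);
        table.close();
      }
    }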


RE: hbase use error

Posted by QiXiangming <qi...@hotmail.com>.




Dear Yu, this is a snippet of log , thank you for diagnosis!
it seems the upstream socket cracks. but i can't find valuable clue from other datanode's log!
 
------------------------------------------------------------------------------------------------------
 
2014-09-17 11:02:30,058 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1977874513-192.168.20.245-1397637902470:blk_1073991434_253116
2014-09-17 11:02:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861, type=HAS_DOWNSTREAM_IN_PIPELINE: Thread is interrupted.
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863, type=HAS_DOWNSTREAM_IN_PIPELINE: Thread is interrupted.
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2014-09-17 11:02:32,362 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861 received exception java.io.IOException: Premature EOF from inputStream
2014-09-17 11:02:32,362 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863 received exception java.io.IOException: Premature EOF from inputStream
2014-09-17 11:02:32,362 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: slave3:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.248:55292 dest: /192.168.20.248:50010
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:32,362 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: slave3:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.248:55304 dest: /192.168.20.248:50010
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:33,957 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:33,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860, type=LAST_IN_PIPELINE, downstreams=0:[]: Thread is interrupted.
2014-09-17 11:02:33,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-09-17 11:02:33,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860 received exception java.io.IOException: Premature EOF from inputStream
2014-09-17 11:02:33,958 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: slave3:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.246:45823 dest: /192.168.20.248:50010

 
Date: Tue, 16 Sep 2014 19:50:47 -0700
Subject: Re: hbase use error
From: yuzhihong@gmail.com
To: user@hadoop.apache.org

Please pastebin the log instead of sending to me.
See https://issues.apache.org/jira/browse/HBASE-11339
It is under development.
On Tue, Sep 16, 2014 at 7:47 PM, QiXiangming <qi...@hotmail.com> wrote:



thank you for your reply so quickly.we cdh 5.store files directly in hbase , not path.
i have read some hbase schema design , says that it is recommended that for large file store path in hbase , and put real content in hdfs sequencefile. but i think 20M is not to big.
i download those log now , and send you later.
where can i find  HBASE-11339 HBase MOB?

Date: Tue, 16 Sep 2014 19:34:20 -0700
Subject: Re: hbase use error
From: yuzhihong@gmail.com
To: user@hadoop.apache.org
CC: common-dev@hadoop.apache.org

Which hadoop release are you using ?Can you pastebin more of the server logs ?
bq. load file larger than 20M
Do you store such file(s) directly on hdfs and put its path in hbase ?See HBASE-11339 HBase MOB
On Tue, Sep 16, 2014 at 7:29 PM, QiXiangming <qi...@hotmail.com> wrote:



hello ,everyone        i use hbase to store small pic or files , and meet an exception raised from hdfs, as following :
 slave2:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.246:33162 dest: /192.168.20.247:50010
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
	at java.lang.Thread.run(Thread.java:745)

when hbase stores pics or file  under 200k, it works well, but if you load file larger than 20M , hbase definitely down!
what's wrong with it ? can anyone help use? 

URGENT!!!

Qi Xiangming 		 	   		  

 		 	   		  

 		 	   		   		 	   		  

RE: hbase use error

Posted by QiXiangming <qi...@hotmail.com>.




Dear Yu, this is a snippet of log , thank you for diagnosis!
it seems the upstream socket cracks. but i can't find valuable clue from other datanode's log!
 
------------------------------------------------------------------------------------------------------
 
2014-09-17 11:02:30,058 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1977874513-192.168.20.245-1397637902470:blk_1073991434_253116
2014-09-17 11:02:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861, type=HAS_DOWNSTREAM_IN_PIPELINE: Thread is interrupted.
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863, type=HAS_DOWNSTREAM_IN_PIPELINE: Thread is interrupted.
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2014-09-17 11:02:32,362 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861 received exception java.io.IOException: Premature EOF from inputStream
2014-09-17 11:02:32,362 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863 received exception java.io.IOException: Premature EOF from inputStream
2014-09-17 11:02:32,362 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: slave3:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.248:55292 dest: /192.168.20.248:50010
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:32,362 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: slave3:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.248:55304 dest: /192.168.20.248:50010
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:33,957 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:33,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860, type=LAST_IN_PIPELINE, downstreams=0:[]: Thread is interrupted.
2014-09-17 11:02:33,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-09-17 11:02:33,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860 received exception java.io.IOException: Premature EOF from inputStream
2014-09-17 11:02:33,958 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: slave3:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.246:45823 dest: /192.168.20.248:50010

 
Date: Tue, 16 Sep 2014 19:50:47 -0700
Subject: Re: hbase use error
From: yuzhihong@gmail.com
To: user@hadoop.apache.org

Please pastebin the log instead of sending to me.
See https://issues.apache.org/jira/browse/HBASE-11339
It is under development.
On Tue, Sep 16, 2014 at 7:47 PM, QiXiangming <qi...@hotmail.com> wrote:



thank you for your reply so quickly.we cdh 5.store files directly in hbase , not path.
i have read some hbase schema design , says that it is recommended that for large file store path in hbase , and put real content in hdfs sequencefile. but i think 20M is not to big.
i download those log now , and send you later.
where can i find  HBASE-11339 HBase MOB?

Date: Tue, 16 Sep 2014 19:34:20 -0700
Subject: Re: hbase use error
From: yuzhihong@gmail.com
To: user@hadoop.apache.org
CC: common-dev@hadoop.apache.org

Which hadoop release are you using ?Can you pastebin more of the server logs ?
bq. load file larger than 20M
Do you store such file(s) directly on hdfs and put its path in hbase ?See HBASE-11339 HBase MOB
On Tue, Sep 16, 2014 at 7:29 PM, QiXiangming <qi...@hotmail.com> wrote:



hello ,everyone        i use hbase to store small pic or files , and meet an exception raised from hdfs, as following :
 slave2:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.246:33162 dest: /192.168.20.247:50010
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
	at java.lang.Thread.run(Thread.java:745)

when hbase stores pics or file  under 200k, it works well, but if you load file larger than 20M , hbase definitely down!
what's wrong with it ? can anyone help use? 

URGENT!!!

Qi Xiangming 		 	   		  

 		 	   		  

 		 	   		   		 	   		  

RE: hbase use error

Posted by QiXiangming <qi...@hotmail.com>.




Dear Yu, this is a snippet of log , thank you for diagnosis!
it seems the upstream socket cracks. but i can't find valuable clue from other datanode's log!
 
------------------------------------------------------------------------------------------------------
 
2014-09-17 11:02:30,058 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1977874513-192.168.20.245-1397637902470:blk_1073991434_253116
2014-09-17 11:02:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861, type=HAS_DOWNSTREAM_IN_PIPELINE: Thread is interrupted.
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863, type=HAS_DOWNSTREAM_IN_PIPELINE: Thread is interrupted.
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2014-09-17 11:02:32,362 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861 received exception java.io.IOException: Premature EOF from inputStream
2014-09-17 11:02:32,362 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863 received exception java.io.IOException: Premature EOF from inputStream
2014-09-17 11:02:32,362 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: slave3:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.248:55292 dest: /192.168.20.248:50010
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:32,362 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: slave3:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.248:55304 dest: /192.168.20.248:50010
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:33,957 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:33,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860, type=LAST_IN_PIPELINE, downstreams=0:[]: Thread is interrupted.
2014-09-17 11:02:33,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-09-17 11:02:33,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860 received exception java.io.IOException: Premature EOF from inputStream
2014-09-17 11:02:33,958 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: slave3:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.246:45823 dest: /192.168.20.248:50010

 
Date: Tue, 16 Sep 2014 19:50:47 -0700
Subject: Re: hbase use error
From: yuzhihong@gmail.com
To: user@hadoop.apache.org

Please pastebin the log instead of sending to me.
See https://issues.apache.org/jira/browse/HBASE-11339
It is under development.
On Tue, Sep 16, 2014 at 7:47 PM, QiXiangming <qi...@hotmail.com> wrote:



thank you for your reply so quickly.we cdh 5.store files directly in hbase , not path.
i have read some hbase schema design , says that it is recommended that for large file store path in hbase , and put real content in hdfs sequencefile. but i think 20M is not to big.
i download those log now , and send you later.
where can i find  HBASE-11339 HBase MOB?

Date: Tue, 16 Sep 2014 19:34:20 -0700
Subject: Re: hbase use error
From: yuzhihong@gmail.com
To: user@hadoop.apache.org
CC: common-dev@hadoop.apache.org

Which hadoop release are you using ?Can you pastebin more of the server logs ?
bq. load file larger than 20M
Do you store such file(s) directly on hdfs and put its path in hbase ?See HBASE-11339 HBase MOB
On Tue, Sep 16, 2014 at 7:29 PM, QiXiangming <qi...@hotmail.com> wrote:



hello ,everyone        i use hbase to store small pic or files , and meet an exception raised from hdfs, as following :
 slave2:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.246:33162 dest: /192.168.20.247:50010
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
	at java.lang.Thread.run(Thread.java:745)

when hbase stores pics or file  under 200k, it works well, but if you load file larger than 20M , hbase definitely down!
what's wrong with it ? can anyone help use? 

URGENT!!!

Qi Xiangming 		 	   		  

 		 	   		  

 		 	   		   		 	   		  

RE: hbase use error

Posted by QiXiangming <qi...@hotmail.com>.
Dear Yu, this is a snap of log , thank you for diagnosis!
it seems the upstream socket cracks. but i can't find valuable clue from other datanode's log!
 
------------------------------------------------------------------------------------------------------
 
2014-09-17 11:02:30,058 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1977874513-192.168.20.245-1397637902470:blk_1073991434_253116
2014-09-17 11:02:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861, type=HAS_DOWNSTREAM_IN_PIPELINE: Thread is interrupted.
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863, type=HAS_DOWNSTREAM_IN_PIPELINE: Thread is interrupted.
2014-09-17 11:02:32,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2014-09-17 11:02:32,362 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1977874513-192.168.20.245-1397637902470:blk_1074010154_271861 received exception java.io.IOException: Premature EOF from inputStream
2014-09-17 11:02:32,362 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1977874513-192.168.20.245-1397637902470:blk_1074010156_271863 received exception java.io.IOException: Premature EOF from inputStream
2014-09-17 11:02:32,362 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: slave3:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.248:55292 dest: /192.168.20.248:50010
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:32,362 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: slave3:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.248:55304 dest: /192.168.20.248:50010
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:33,957 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860
java.io.IOException: Premature EOF from inputStream
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 at java.lang.Thread.run(Thread.java:745)
2014-09-17 11:02:33,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860, type=LAST_IN_PIPELINE, downstreams=0:[]: Thread is interrupted.
2014-09-17 11:02:33,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-09-17 11:02:33,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1977874513-192.168.20.245-1397637902470:blk_1074010153_271860 received exception java.io.IOException: Premature EOF from inputStream
2014-09-17 11:02:33,958 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: slave3:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.246:45823 dest: /192.168.20.248:50010

 


Re: hbase use error

Posted by Ted Yu <yu...@gmail.com>.
Please pastebin the log instead of sending to me.

See https://issues.apache.org/jira/browse/HBASE-11339
It is under development.



RE: hbase use error

Posted by QiXiangming <qi...@hotmail.com>.
Thank you for your quick reply. We run CDH 5, and we store the files directly in HBase, not just their paths.
I have read some HBase schema-design guidance saying that for large files it is recommended to store only the path in HBase and put the real content in an HDFS SequenceFile, but I don't think 20 MB is too big.
I am downloading those logs now and will send them later.
Where can I find HBASE-11339 (HBase MOB)?
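
For reference, the pattern described above (file content in HDFS, only a pointer in HBase) might look like the minimal sketch below. It assumes an HBase 1.x style client API and a hypothetical table named "files" with a column family "f"; none of those names come from this thread.

import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IOUtils;

public class StoreBlobByPath {
  // Writes the blob to HDFS and records only its path in HBase,
  // so each HBase cell stays small regardless of the file size.
  public static void store(Configuration conf, String rowKey,
                           InputStream blob, String hdfsDir) throws Exception {
    Path dst = new Path(hdfsDir, rowKey);       // e.g. /data/blobs/<rowKey>
    FileSystem fs = FileSystem.get(conf);
    FSDataOutputStream out = fs.create(dst);
    IOUtils.copyBytes(blob, out, conf, true);   // copies and closes both streams

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("files"))) {
      Put put = new Put(Bytes.toBytes(rowKey));
      put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("path"),
                    Bytes.toBytes(dst.toString()));
      table.put(put);                           // only the pointer goes into HBase
    }
  }
}

One possible reason this pattern matters here: the client-side hbase.client.keyvalue.maxsize limit defaults to 10 MB, so writing roughly 20 MB values inline is already outside the default envelope. That is an assumption to verify against this cluster's settings, not a confirmed cause of the crash.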


RE: hbase use error

Posted by mike Zarrin <mi...@unitedrmr.com>.
Please unsubscribe me

 

From: Ted Yu [mailto:yuzhihong@gmail.com] 
Sent: Tuesday, September 16, 2014 7:34 PM
To: common-user@hadoop.apache.org
Cc: common-dev@hadoop.apache.org
Subject: Re: hbase use error

 

Which hadoop release are you using ?

Can you pastebin more of the server logs ?

 

bq. load file larger than 20M

 

Do you store such file(s) directly on hdfs and put its path in hbase ?

See HBASE-11339 HBase MOB

 

On Tue, Sep 16, 2014 at 7:29 PM, QiXiangming <qi...@hotmail.com> wrote:

hello ,everyone

        i use hbase to store small pic or files , and meet an exception raised from hdfs, as following :

 

 slave2:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.246:33162 dest: /192.168.20.247:50010
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
at java.lang.Thread.run(Thread.java:745)

 

when hbase stores pics or file  under 200k, it works well, 

but if you load file larger than 20M , hbase definitely down!

 

what's wrong with it ? 

can anyone help use? 

 

 

URGENT!!!

 

 

Qi Xiangming

 


RE: hbase use error

Posted by QiXiangming <qi...@hotmail.com>.
thank you for your reply so quickly.we cdh 5.store files directly in hbase , not path.
i have read some hbase schema design , says that it is recommended that for large file store path in hbase , and put real content in hdfs sequencefile. but i think 20M is not to big.
i download those log now , and send you later.
where can i find  HBASE-11339 HBase MOB?

Date: Tue, 16 Sep 2014 19:34:20 -0700
Subject: Re: hbase use error
From: yuzhihong@gmail.com
To: user@hadoop.apache.org
CC: common-dev@hadoop.apache.org

Which hadoop release are you using ?Can you pastebin more of the server logs ?
bq. load file larger than 20M
Do you store such file(s) directly on hdfs and put its path in hbase ?See HBASE-11339 HBase MOB
On Tue, Sep 16, 2014 at 7:29 PM, QiXiangming <qi...@hotmail.com> wrote:



hello ,everyone        i use hbase to store small pic or files , and meet an exception raised from hdfs, as following :
 slave2:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.246:33162 dest: /192.168.20.247:50010
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
	at java.lang.Thread.run(Thread.java:745)

when hbase stores pics or file  under 200k, it works well, but if you load file larger than 20M , hbase definitely down!
what's wrong with it ? can anyone help use? 

URGENT!!!

Qi Xiangming 		 	   		  

 		 	   		  

RE: hbase use error

Posted by mike Zarrin <mi...@unitedrmr.com>.
Please unsubscribe me

 

From: Ted Yu [mailto:yuzhihong@gmail.com] 
Sent: Tuesday, September 16, 2014 7:34 PM
To: common-user@hadoop.apache.org
Cc: common-dev@hadoop.apache.org
Subject: Re: hbase use error

 

Which hadoop release are you using ?

Can you pastebin more of the server logs ?

 

bq. load file larger than 20M

 

Do you store such file(s) directly on hdfs and put its path in hbase ?

See HBASE-11339 HBase MOB

 

On Tue, Sep 16, 2014 at 7:29 PM, QiXiangming <qi...@hotmail.com> wrote:

hello ,everyone

        i use hbase to store small pic or files , and meet an exception raised from hdfs, as following :

 

 slave2:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.246:33162 dest: /192.168.20.247:50010
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
at java.lang.Thread.run(Thread.java:745)

 

when hbase stores pics or file  under 200k, it works well, 

but if you load file larger than 20M , hbase definitely down!

 

what's wrong with it ? 

can anyone help use? 

 

 

URGENT!!!

 

 

Qi Xiangming

 


RE: hbase use error

Posted by mike Zarrin <mi...@unitedrmr.com>.
Please unsubscribe me

 

From: Ted Yu [mailto:yuzhihong@gmail.com] 
Sent: Tuesday, September 16, 2014 7:34 PM
To: common-user@hadoop.apache.org
Cc: common-dev@hadoop.apache.org
Subject: Re: hbase use error

 

Which hadoop release are you using ?

Can you pastebin more of the server logs ?

 

bq. load file larger than 20M

 

Do you store such file(s) directly on hdfs and put its path in hbase ?

See HBASE-11339 HBase MOB

 

On Tue, Sep 16, 2014 at 7:29 PM, QiXiangming <qi...@hotmail.com> wrote:

hello ,everyone

        i use hbase to store small pic or files , and meet an exception raised from hdfs, as following :

 

 slave2:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.246:33162 dest: /192.168.20.247:50010
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
at java.lang.Thread.run(Thread.java:745)

 

when hbase stores pics or file  under 200k, it works well, 

but if you load file larger than 20M , hbase definitely down!

 

what's wrong with it ? 

can anyone help use? 

 

 

URGENT!!!

 

 

Qi Xiangming

 


RE: hbase use error

Posted by QiXiangming <qi...@hotmail.com>.
Thank you for your quick reply. We use CDH 5, and we store the files directly in HBase, not paths.
I have read some HBase schema-design material; it recommends that for large files you store only the path in HBase and put the real content in an HDFS SequenceFile. But I don't think 20 MB is too big.
I am downloading those logs now and will send them to you later.
Where can I find HBASE-11339 (HBase MOB)?
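
For reference, a minimal sketch of that path-in-HBase pattern (assuming Hadoop 2.x / HBase 0.98-era client APIs; the table name "files", the "meta" family, and the helper class are illustrative, not from this thread):

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class BlobStore {
      // Write the large content to an HDFS SequenceFile, then store only
      // the pointer (the file path) in HBase.
      public static void store(String rowKey, byte[] content) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        Path seqPath = new Path("/data/blobs/" + rowKey + ".seq");

        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
            SequenceFile.Writer.file(seqPath),
            SequenceFile.Writer.keyClass(Text.class),
            SequenceFile.Writer.valueClass(BytesWritable.class))) {
          writer.append(new Text(rowKey), new BytesWritable(content));
        }

        try (HTable table = new HTable(conf, "files")) {
          Put put = new Put(Bytes.toBytes(rowKey));
          put.add(Bytes.toBytes("meta"), Bytes.toBytes("seqfile"),
                  Bytes.toBytes(seqPath.toString()));
          table.put(put);
        }
      }
    }

Reads do the reverse: fetch the pointer from HBase, then stream the bytes out of the SequenceFile, so region servers never have to push multi-megabyte cells through their write path.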

Date: Tue, 16 Sep 2014 19:34:20 -0700
Subject: Re: hbase use error
From: yuzhihong@gmail.com
To: user@hadoop.apache.org
CC: common-dev@hadoop.apache.org

Which hadoop release are you using ?
Can you pastebin more of the server logs ?

bq. load file larger than 20M

Do you store such file(s) directly on hdfs and put its path in hbase ?
See HBASE-11339 HBase MOB
On Tue, Sep 16, 2014 at 7:29 PM, QiXiangming <qi...@hotmail.com> wrote:



hello ,everyone
        i use hbase to store small pic or files , and meet an exception raised from hdfs, as following :
 slave2:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.20.246:33162 dest: /192.168.20.247:50010
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
	at java.lang.Thread.run(Thread.java:745)

when hbase stores pics or file  under 200k, it works well,
but if you load file larger than 20M , hbase definitely down!
what's wrong with it ?
can anyone help use?

URGENT!!!

Qi Xiangming

Re: hbase use error

Posted by Ted Yu <yu...@gmail.com>.
Which Hadoop release are you using?
Can you pastebin more of the server logs?

bq. load file larger than 20M

Do you store such file(s) directly on HDFS and put its path in HBase?
See HBASE-11339 (HBase MOB)
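
For reference, HBASE-11339 lives at https://issues.apache.org/jira/browse/HBASE-11339. MOB was still a feature branch when this thread was written and only shipped in a later Apache release, so what follows is only a sketch of the later API, not something available on the CDH 5 cluster discussed here; the table and family names are made up:

    import java.io.IOException;

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class MobTableSetup {
      // Create a table whose column family stores large cells as MOB files,
      // keeping values above the threshold out of normal compaction rewrites.
      public static void create(Admin admin) throws IOException {
        HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("files"));
        HColumnDescriptor hcd = new HColumnDescriptor("f");
        hcd.setMobEnabled(true);         // flag the family as MOB
        hcd.setMobThreshold(102400L);    // cells over 100 KB take the MOB path
        htd.addFamily(hcd);
        admin.createTable(htd);
      }
    }

The appeal of MOB is that blobs keep the plain HBase read/write API while the large values stay out of the regular flush-and-compaction cycle.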

On Tue, Sep 16, 2014 at 7:29 PM, QiXiangming <qi...@hotmail.com>
wrote:

> hello ,everyone
>         i use hbase to store small pic or files , and meet an exception
> raised from hdfs, as following :
>
>  slave2:50010:DataXceiver error processing WRITE_BLOCK operation  src: /
> 192.168.20.246:33162 dest: /192.168.20.247:50010
> java.io.IOException: Premature EOF from inputStream
> at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
>
> at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
>
> at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
>
> at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
>
> at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
>
> at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
>
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
>
> at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
>
> at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
> at java.lang.Thread.run(Thread.java:745)
>
> when hbase stores pics or file  under 200k, it works well,
> but if you load file larger than 20M , hbase definitely down!
>
> what's wrong with it ?
> can anyone help use?
>
>
> URGENT!!!
>
>
> Qi Xiangming
>
