Posted to dev@htrace.apache.org by Mihir Monani <mm...@salesforce.com> on 2017/01/29 06:05:10 UTC
Error in Detaching Span
Hi Everyone,
This is my configuration for the HTrace setup:
HBase 1.1.8
Hadoop 2.7.3
HTrace 3.2.0-incubating
HTraced Server
When I run the HBase load client with HTrace enabled, it throws the following
error in the RegionServer logs.
*RegionServer Log:*
2017-01-28 13:42:57,231 DEBUG [8,queue=3,port=16020] regionserver.HRegion -
Flush requested on
HT,EXCHG44:TOPIC79:00000000,1485610936283.4fa27d4c74b512d9611d48b9d241fc00.
2017-01-28 13:42:57,233 DEBUG [MemStoreFlusher.0]
regionserver.FlushLargeStoresPolicy - Column Family: HCF of region
HT,EXCHG44:TOPIC79:00000000,1485610936283.4fa27d4c74b512d9611d48b9d241fc00.
will be flushed because of memstoreSize(294912408) is larger than lower
bound(16777216)
2017-01-28 13:42:57,233 INFO [MemStoreFlusher.0] regionserver.HRegion -
Started memstore flush for
HT,EXCHG44:TOPIC79:00000000,1485610936283.4fa27d4c74b512d9611d48b9d241fc00.,
current region memstore size 281.25 MB, and 1/1 column families' memstores
are being flushed.
2017-01-28 13:42:57,295 ERROR [:blk_1073741855_1031] htrace.Tracer - Tried
to detach trace span
{"s":"5fbda99f463c751a","b":1485610977155,"e":1485610977223,"d":"dataStreamer","p":["77ba839770805ba6"]}
but it has already been detached.
2017-01-28 13:42:57,296 WARN [:blk_1073741855_1031] hdfs.DFSClient -
DataStreamer Exception
java.lang.RuntimeException: Tried to detach trace span
{"s":"5fbda99f463c751a","b":1485610977155,"e":1485610977223,"d":"dataStreamer","p":["77ba839770805ba6"]}
but it has already been detached.
at org.apache.htrace.Tracer.clientError(Tracer.java:60)
at org.apache.htrace.TraceScope.detach(TraceScope.java:57)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:584)
2017-01-28 13:42:57,300 WARN [sync.0] hdfs.DFSClient - Error while syncing
java.io.IOException: DataStreamer Exception:
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:655)
Caused by: java.lang.RuntimeException: Tried to detach trace span
{"s":"5fbda99f463c751a","b":1485610977155,"e":1485610977223,"d":"dataStreamer","p":["77ba839770805ba6"]}
but it has already been detached.
at org.apache.htrace.Tracer.clientError(Tracer.java:60)
at org.apache.htrace.TraceScope.detach(TraceScope.java:57)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:584)
2017-01-28 13:42:57,300 ERROR [sync.0] wal.FSHLog - Error syncing, request
close of WAL
java.io.IOException: DataStreamer Exception:
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:655)
Caused by: java.lang.RuntimeException: Tried to detach trace span
{"s":"5fbda99f463c751a","b":1485610977155,"e":1485610977223,"d":"dataStreamer","p":["77ba839770805ba6"]}
but it has already been detached.
at org.apache.htrace.Tracer.clientError(Tracer.java:60)
at org.apache.htrace.TraceScope.detach(TraceScope.java:57)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:584)
2017-01-28 13:42:57,301 DEBUG [9.18:16020.logRoller] regionserver.LogRoller
- WAL roll requested
2017-01-28 13:42:57,302 ERROR [MemStoreFlusher.0]
regionserver.MemStoreFlusher - Cache flush failed for region
HT,EXCHG44:TOPIC79:00000000,1485610936283.4fa27d4c74b512d9611d48b9d241fc00.
java.io.IOException: DataStreamer Exception:
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:655)
Caused by: java.lang.RuntimeException: Tried to detach trace span
{"s":"5fbda99f463c751a","b":1485610977155,"e":1485610977223,"d":"dataStreamer","p":["77ba839770805ba6"]}
but it has already been detached.
at org.apache.htrace.Tracer.clientError(Tracer.java:60)
at org.apache.htrace.TraceScope.detach(TraceScope.java:57)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:584)
2017-01-28 13:42:57,550 ERROR [sync.1] wal.FSHLog - Error syncing, request
close of WAL
java.io.IOException: DataStreamer Exception:
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:655)
Caused by: java.lang.RuntimeException: Tried to detach trace span
{"s":"5fbda99f463c751a","b":1485610977155,"e":1485610977223,"d":"dataStreamer","p":["77ba839770805ba6"]}
but it has already been detached.
at org.apache.htrace.Tracer.clientError(Tracer.java:60)
at org.apache.htrace.TraceScope.detach(TraceScope.java:57)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:584)
2017-01-28 13:42:57,551 ERROR [9.18:16020.logRoller] wal.FSHLog - Failed
close of WAL writer
hdfs://localhost:9000/hbase/WALs/localhost,16020,1485610340203/localhost%2C16020%2C1485610340203.default.1485610343936,
unflushedEntries=2
org.apache.hadoop.hbase.regionserver.wal.FailedSyncBeforeLogCloseException: java.io.IOException: DataStreamer Exception:
at org.apache.hadoop.hbase.regionserver.wal.FSHLog$SafePointZigZagLatch.waitSafePoint(FSHLog.java:1946)
at org.apache.hadoop.hbase.regionserver.wal.FSHLog.replaceWriter(FSHLog.java:985)
at org.apache.hadoop.hbase.regionserver.wal.FSHLog.rollWriter(FSHLog.java:739)
at org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:148)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: DataStreamer Exception:
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:655)
Caused by: java.lang.RuntimeException: Tried to detach trace span
{"s":"5fbda99f463c751a","b":1485610977155,"e":1485610977223,"d":"dataStreamer","p":["77ba839770805ba6"]}
but it has already been detached.
at org.apache.htrace.Tracer.clientError(Tracer.java:60)
at org.apache.htrace.TraceScope.detach(TraceScope.java:57)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:584)
*DataNode log for the same time period:*
2017-01-28 13:42:54,953 ERROR [blk_1073741899_1087]] datanode.DataNode -
localhost:50010:DataXceiver error processing WRITE_BLOCK operation src:
/host_.22:58408 dst: /host_.18:50010
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:513)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:912)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:815)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:254)
at java.lang.Thread.run(Thread.java:745)
2017-01-28 13:42:57,300 INFO [blk_1073741855_1031]] datanode.DataNode -
Exception for BP-281614574-host_.16-1485609769526:blk_1073741855_1031
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:513)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:912)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:815)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:254)
at java.lang.Thread.run(Thread.java:745)
2017-01-28 13:42:57,300 INFO [WNSTREAM_IN_PIPELINE] datanode.DataNode -
PacketResponder: BP-281614574-host_.16-1485609769526:blk_1073741855_1031,
type=HAS_DOWNSTREAM_IN_PIPELINE: Thread is interrupted.
2017-01-28 13:42:57,300 INFO [WNSTREAM_IN_PIPELINE] datanode.DataNode -
PacketResponder: BP-281614574-host_.16-1485609769526:blk_1073741855_1031,
type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2017-01-28 13:42:57,397 INFO [blk_1073741855_1031]] datanode.DataNode -
opWriteBlock BP-281614574-host_.16-1485609769526:blk_1073741855_1031
received exception java.io.IOException: Premature EOF from inputStream
2017-01-28 13:42:57,398 ERROR [blk_1073741855_1031]] datanode.DataNode -
localhost:50010:DataXceiver error processing WRITE_BLOCK operation src:
/host_.18:55410 dst: /host_.18:50010
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:513)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:912)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:815)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:254)
at java.lang.Thread.run(Thread.java:745)
2017-01-28 13:42:57,467 INFO [blk_1073741946_1137]] datanode.DataNode -
Receiving BP-281614574-host_.16-1485609769526:blk_1073741946_1137 src:
/host_.18:56726 dest: /host_.18:50010
2017-01-28 13:43:54,832 ERROR [ng for operation #1]] datanode.DataNode -
localhost:50010:DataXceiver error processing unknown operation src:
/host_138:46052 dst: /host_.18:50010
java.io.EOFException
at java.io.DataInputStream.readShort(DataInputStream.java:315)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:230)
at java.lang.Thread.run(Thread.java:745)
2017-01-28 13:44:49,977 INFO [WNSTREAM_IN_PIPELINE] datanode.DataNode -
Received BP-281614574-host_.16-1485609769526:blk_1073741938_1129 size 83
from /host_.24:43496
I am not able to tell where this error originates: in the HDFS layer or on
the HTrace client side.
Can someone point me in the right direction?
Thanks,
Mihir Monani
Re: Error in Detaching Span
Posted by Colin McCabe <cm...@apache.org>.
Hi Mihir,
This is most likely an error in the HBase client. It sounds like the
client is trying to detach a span that has already been detached.
Detaching is the process of moving a span from one thread to another.
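For context, the failure pattern described here can be sketched with the toy class below. `MiniScope` is a simplified stand-in written for illustration, not the real `org.apache.htrace.TraceScope`; it only mimics the detach-twice check that produces the "already been detached" complaint in the logs.

```java
// Illustration only: a stand-in mimicking HTrace's detach-twice check.
public class DetachDemo {
    static class MiniScope {
        private final String description;
        private boolean detached = false;

        MiniScope(String description) {
            this.description = description;
        }

        // Detach hands the span off so another thread can continue it.
        // Detaching the same scope twice is a caller bug, reported as
        // a RuntimeException (the real API routes this through
        // Tracer.clientError).
        String detach() {
            if (detached) {
                throw new RuntimeException("Tried to detach trace span "
                        + description + " but it has already been detached.");
            }
            detached = true;
            return description; // the real API returns the Span itself
        }
    }

    public static void main(String[] args) {
        MiniScope scope = new MiniScope("dataStreamer");
        scope.detach(); // first detach: fine, span moves to another thread
        try {
            scope.detach(); // second detach: reproduces the logged error
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

In other words, some code path in the client appears to be calling detach on the same scope from two places, which is exactly what the DataStreamer stack trace above shows.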
Have you asked this question on the HBase mailing list?
best,
Colin
On Sat, Jan 28, 2017, at 22:05, Mihir Monani wrote:
> [snip]