Posted to hdfs-dev@hadoop.apache.org by Divya R <av...@gmail.com> on 2013/07/19 09:20:13 UTC

Exception while syncing from Flume to HDFS

I'm running Hadoop 1.2.0 and Flume 1.3. Everything works fine when each is
run independently, but some time after I start my Tomcat I get the exception
below.

  2013-07-17 12:40:35,640 (ResponseProcessor for block
blk_5249456272858461891_436734) [WARN -
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:3015)]
DFSOutputStream ResponseProcessor exception  for block
blk_5249456272858461891_436734java.net.SocketTimeoutException: 63000
millis timeout while waiting for channel to be ready for read. ch :
java.nio.channels.SocketChannel[connected local=/127.0.0.1:24433
remote=/127.0.0.1:50010]
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
at java.io.DataInputStream.readFully(DataInputStream.java:195)
at java.io.DataInputStream.readLong(DataInputStream.java:416)
at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:124)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2967)

     2013-07-17 12:40:35,800 (hdfs-hdfs-write-roll-timer-0) [WARN -
org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:277)]
failed to close() HDFSWriter for file
(hdfs://localhost:9000/flume/Broadsoft_App2/20130717/jboss/Broadsoft_App2.1374044838498.tmp).
Exception follows.
java.io.IOException: All datanodes 127.0.0.1:50010 are bad. Aborting...
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3096)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2100(DFSClient.java:2589)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2793)
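
One thing I have been considering is raising the DFS client socket timeouts.
If I understand the Hadoop 1.x property names correctly (dfs.socket.timeout
for the read side, dfs.datanode.socket.write.timeout for the write side; the
values below are just guesses, not tested), it would look something like
this:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class TimeoutConfig {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.default.name", "hdfs://localhost:9000");
            // Client<->datanode read timeout in ms; the default is 60000,
            // which seems in line with the ~63000 ms wait in the log above.
            conf.setInt("dfs.socket.timeout", 180000);
            // Write-side timeout to the datanode, in ms (default 480000).
            conf.setInt("dfs.datanode.socket.write.timeout", 240000);
            FileSystem fs = FileSystem.get(conf);
            System.out.println("connected to " + fs.getUri());
            fs.close();
        }
    }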


  Java snippet for configuration:

configuration.set("fs.default.name", "hdfs://localhost:9000");
configuration.set("mapred.job.tracker", "hdfs://localhost:9000");

I'm running a single datanode. My Java program just reads the files that
were written to HDFS by Flume and prints them to the screen, nothing more
(a trimmed-down sketch of the read path is below). Any sort of help is
highly appreciated.
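
For reference, the reading side is essentially this (the class name and path
are placeholders, not my real code):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsCat {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.default.name", "hdfs://localhost:9000");
            FileSystem fs = FileSystem.get(conf);
            // Placeholder path; the real files live under /flume/...
            FSDataInputStream in = fs.open(new Path("/flume/some/file.log"));
            BufferedReader reader = new BufferedReader(new InputStreamReader(in));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line); // just dump each line to the screen
                }
            } finally {
                reader.close();
                fs.close();
            }
        }
    }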


Regards,
Divya

Re: Exception while syncing from Flume to HDFS

Posted by Azuryy Yu <az...@gmail.com>.
Hi,

This is not an HDFS issue. You could put your question to the Flume mailing
list instead.