Posted to user@flume.apache.org by ch huang <ju...@gmail.com> on 2013/12/28 04:02:46 UTC

issue about "HDFS IO error" in flume log

hi, mailing list:

         I get the following error when using Flume (the full log is at the
bottom of this mail). I checked HDFSDataStream.java:80, and it says:

    boolean appending = false;
    if (conf.getBoolean("hdfs.append.support", false) == true
        && hdfs.isFile(dstPath)) {
      outStream = hdfs.append(dstPath);
      appending = true;
    } else {
      outStream = hdfs.create(dstPath);   // line 80
    }
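
For reference, a self-contained version of that branch looks roughly like
this (the class name and the path argument are just placeholders I made up
for illustration, not Flume code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OpenLikeHdfsDataStream {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml from the classpath
    Path dstPath = new Path(args[0]);           // placeholder: the bucket's .tmp file
    FileSystem hdfs = dstPath.getFileSystem(conf);

    FSDataOutputStream outStream;
    // Append only when the flag is set AND the target already exists as a file;
    // otherwise (re)create it. This mirrors the decision quoted above.
    if (conf.getBoolean("hdfs.append.support", false) && hdfs.isFile(dstPath)) {
      outStream = hdfs.append(dstPath);
    } else {
      outStream = hdfs.create(dstPath);
    }
    outStream.close();
  }
}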

I also checked the HDFS code: in org.apache.hadoop.hdfs.DFSConfigKeys I
cannot find "hdfs.append.support" anywhere; the only append-related key there is

public static final String  DFS_SUPPORT_APPEND_KEY = "dfs.support.append";

Since nothing in Hadoop defines "hdfs.append.support", conf.getBoolean()
always falls back to its false default. So if Flume breaks while it is
writing to a file, when it restarts the code inside the if statement will
never be executed and it can never append to the existing file; it always
goes through hdfs.create(). That's a bug!
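
To make that concrete, here is a tiny standalone check (just an illustration
I wrote, assuming a stock Hadoop classpath and nothing Flume-specific):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class AppendFlagCheck {
  public static void main(String[] args) {
    // Plain Configuration: whatever core-site.xml / hdfs-site.xml are on the classpath.
    Configuration conf = new Configuration();

    // The key HDFSDataStream reads. As far as I can see, no Hadoop *-default.xml
    // ships this property, so unless an operator adds it by hand, getBoolean()
    // returns the supplied default (false) and line 80 always calls hdfs.create().
    System.out.println("hdfs.append.support = "
        + conf.getBoolean("hdfs.append.support", false));

    // The append switch HDFS itself defines is a different property name:
    System.out.println("HDFS key = " + DFSConfigKeys.DFS_SUPPORT_APPEND_KEY);  // dfs.support.append
  }
}

Here is the error I get in the Flume log: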


25 Dec 2013 05:34:23,119 INFO  [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.BucketWriter.open:219)  - Creating /user/hive/warehouse/adx.db/ssp_response/2013-12-23/.FlumeData.1387777961215.tmp

25 Dec 2013 05:34:23,119 WARN  [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.HDFSEventSink.process:418)  - HDFS IO error
java.io.IOException: Filesystem closed
        at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:565)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1247)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1212)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:276)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:265)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:82)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:886)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:867)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:766)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:755)
        at org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:80)
        at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:227)
        at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:220)
        at org.apache.flume.sink.hdfs.BucketWriter$8$1.run(BucketWriter.java:536)
        at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:160)
        at org.apache.flume.sink.hdfs.BucketWriter.access$1000(BucketWriter.java:56)
        at org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:533)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)