Posted to user@flume.apache.org by Zhishan Li <zh...@gmail.com> on 2015/11/14 17:41:54 UTC

Failure to upload data into s3 sink.

Hi all,

I want to upload collected data into an Amazon S3 bucket, but I hit the messages below when I set:

agent.sinks.k1.hdfs.path = s3://bigdata/test
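For context, the relevant sink section of my config looks roughly like the sketch below (source and channel wiring is elided; the names r1/c1/k1 and the bucket path are just the ones from my setup):

```properties
# Sketch of the agent layout (names are placeholders from my setup)
agent.sources = r1
agent.channels = c1
agent.sinks = k1

# HDFS sink pointed at S3 via the EMR filesystem
agent.sinks.k1.type = hdfs
agent.sinks.k1.hdfs.path = s3://bigdata/test
agent.sinks.k1.hdfs.fileType = DataStream
agent.sinks.k1.channel = c1
```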


14 Nov 2015 16:26:01,518 INFO  [hdfs-k1-call-runner-0] (org.apache.flume.sink.hdfs.AbstractHDFSWriter.reflectGetNumCurrentReplicas:188)  - FileSystem's output stream doesn't support getNumCurrentReplicas; --HDFS-826 not available; fsOut=com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream; err=java.lang.NoSuchMethodException: com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream.getNumCurrentReplicas()
14 Nov 2015 16:26:01,518 INFO  [hdfs-k1-call-runner-0] (org.apache.flume.sink.hdfs.AbstractHDFSWriter.reflectGetNumCurrentReplicas:188)  - FileSystem's output stream doesn't support getNumCurrentReplicas; --HDFS-826 not available; fsOut=com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream; err=java.lang.NoSuchMethodException: com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream.getNumCurrentReplicas()
14 Nov 2015 16:26:01,518 WARN  [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.BucketWriter.getRefIsClosed:183)  - isFileClosed is not available in the version of HDFS being used. Flume will not attempt to close files if the close fails on the first attempt
java.lang.NoSuchMethodException: com.amazon.ws.emr.hadoop.fs.EmrFileSystem.isFileClosed(org.apache.hadoop.fs.Path)
	at java.lang.Class.getMethod(Class.java:1665)
	at org.apache.flume.sink.hdfs.BucketWriter.getRefIsClosed(BucketWriter.java:180)
	at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:268)
	at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:514)
	at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:418)
	at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
	at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
	at java.lang.Thread.run(Thread.java:745)


I have found the JIRA issue about it: https://issues.apache.org/jira/browse/FLUME-2427

But I don’t see a fix for the WARN there, and I can’t find API documentation for com.amazon.ws.emr.hadoop.fs.EmrFileSystem.

Please help me fix it.

Thanks

Re: Failure to upload data into s3 sink.

Posted by Zhishan Li <zh...@gmail.com>.
The result is that the temporary file is never renamed to its final name by stripping off the “.tmp” suffix, even when the agent is killed.
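In case it helps anyone reproducing this: the HDFS sink only renames the “.tmp” file when it closes it, so the roll settings decide when that rename happens. A sketch of the settings I would expect to matter (values are illustrative only, and channel wiring is omitted):

```properties
# Roll on time only, so files are closed (and renamed) predictably.
# rollInterval is in seconds; 0 disables a roll criterion.
agent.sinks.k1.type = hdfs
agent.sinks.k1.hdfs.path = s3://bigdata/test
agent.sinks.k1.hdfs.rollInterval = 300
agent.sinks.k1.hdfs.rollSize = 0
agent.sinks.k1.hdfs.rollCount = 0
# Close files that receive no events for 60 seconds.
agent.sinks.k1.hdfs.idleTimeout = 60
```

If the close itself fails against S3, though, the WARN above suggests Flume won’t retry it, since isFileClosed isn’t available on EmrFileSystem.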
 

> On 15 Nov, 2015, at 12:41 am, Zhishan Li <zh...@gmail.com> wrote: