Posted to user@flume.apache.org by Dinesh Narayanan <nd...@gmail.com> on 2015/11/19 06:26:09 UTC

Having issues with snappy compression

Hello,
I just started learning Flume. I get this error when I try to enable Snappy
compression with Flume 1.6 on Mac OS X 10.9.5.

However, my other MapReduce jobs with Snappy compression work fine.

org.apache.flume.EventDeliveryException: java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
    at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:463)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
    at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:65)
    at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:134)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:165)
    at org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1201)
    at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1094)
    at org.apache.hadoop.io.SequenceFile$BlockCompressWriter.<init>(SequenceFile.java:1444)
    at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:277)
    at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:582)
    at org.apache.flume.sink.hdfs.HDFSSequenceFile.open(HDFSSequenceFile.java:98)
    at org.apache.flume.sink.hdfs.HDFSSequenceFile.open(HDFSSequenceFile.java:78)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:254)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:235)
    at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679)
    at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
    at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
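
To narrow this down, here is a minimal check I can run with the same Hadoop
jars and the same -Djava.library.path as the Flume agent JVM (the class name
is mine and this is just a sketch, reporting roughly what the SnappyCodec
check in the trace above looks at):

import org.apache.hadoop.util.NativeCodeLoader;

// Sketch: confirm whether libhadoop is loaded in this JVM, and whether
// that build of libhadoop was compiled with Snappy support. Run with the
// same classpath and -Djava.library.path as the Flume agent.
public class NativeSnappyCheck {
  public static void main(String[] args) {
    boolean libhadoop = NativeCodeLoader.isNativeCodeLoaded();
    System.out.println("libhadoop loaded: " + libhadoop);
    if (libhadoop) {
      // buildSupportsSnappy() is a native call, so only invoke it
      // once libhadoop itself has been loaded
      System.out.println("built with snappy: "
          + NativeCodeLoader.buildSupportsSnappy());
    } else {
      System.out.println("libhadoop not found on java.library.path");
    }
  }
}

If the first line prints false, the agent JVM is not finding libhadoop at
all; if it prints true followed by false, that libhadoop build really was
compiled without Snappy, which matches the error above.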

My sample test sink configuration is:
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = /metrics/raw/flume/%Y/%m/%d/%H
agent1.sinks.sink1.hdfs.filePrefix = metrics
agent1.sinks.sink1.hdfs.fileSuffix = .seq
agent1.sinks.sink1.hdfs.fileType = SequenceFile
agent1.sinks.sink1.hdfs.writeFormat = Text
agent1.sinks.sink1.hdfs.rollInterval = 3
# roll at 1 MB
agent1.sinks.sink1.hdfs.rollSize = 1048576
agent1.sinks.sink1.hdfs.rollCount = 1000
agent1.sinks.sink1.hdfs.codeC = snappy
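
For reference, the stack trace above goes through SequenceFile.createWriter
with a SnappyCodec, so the same failure should be reproducible outside Flume
with a standalone snippet like this (class name and output path are
placeholders of mine):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.SnappyCodec;

// Sketch of the same write path the HDFS sink takes with this config:
// a block-compressed SequenceFile using SnappyCodec. It should fail with
// the identical RuntimeException when libhadoop lacks Snappy support.
public class SnappySeqFileRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    SequenceFile.Writer writer = SequenceFile.createWriter(
        FileSystem.get(conf), conf,
        new Path("/tmp/snappy-repro.seq"),  // placeholder path
        LongWritable.class, Text.class,
        SequenceFile.CompressionType.BLOCK,
        new SnappyCodec());
    writer.append(new LongWritable(1L), new Text("test event"));
    writer.close();
  }
}

If this works in the environment where my MR jobs run but fails when
launched the way the Flume agent is launched, the difference would be in
how each JVM locates the native libraries.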

What could I be missing here?

Thanks
Dinesh