Posted to user@hadoop.apache.org by Uthayan Suthakar <ut...@gmail.com> on 2016/10/04 19:59:20 UTC

native snappy library not available: this version of libhadoop was built without snappy support.

Hello guys,

I have a job that reads compressed (Snappy) data, but when I run it, it
throws the error "native snappy library not available: this version of
libhadoop was built without snappy support".
I followed the instructions here, but they did not resolve the issue:
https://community.hortonworks.com/questions/18903/this-version-of-libhadoop-was-built-without-snappy.html

The checknative command shows that Snappy is installed:
hadoop checknative
16/10/04 21:01:30 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
16/10/04 21:01:30 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
lz4:     true revision:99
bzip2:   true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so

I also have code in the job that checks whether native Snappy is loaded,
and it returns true.
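
For reference, the check is essentially the following (a minimal sketch
against the public Hadoop APIs; the class name is mine):

import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.util.NativeCodeLoader;

// Minimal sketch of the native-Snappy check; class name is illustrative.
public class SnappyNativeCheck {
    public static void main(String[] args) {
        // True only if libhadoop.so was found and loaded by this JVM.
        boolean hadoopLoaded = NativeCodeLoader.isNativeCodeLoaded();
        System.out.println("native-hadoop loaded: " + hadoopLoaded);

        // buildSupportsSnappy() is itself a native method, so guard it:
        // calling it without libhadoop loaded throws UnsatisfiedLinkError.
        if (hadoopLoaded) {
            System.out.println("built with snappy: "
                    + NativeCodeLoader.buildSupportsSnappy());
        }

        // Throws the same RuntimeException as in the stack trace below
        // if either of the conditions above does not hold.
        SnappyCodec.checkNativeCodeLoaded();
        System.out.println("snappy check passed");
    }
}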

Now I have no idea why I'm getting this error. I also had no issue reading
Snappy data with a MapReduce job on the same cluster. Could anyone tell me
what is wrong?



Thank you.

Stack:


java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
        at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:65)
        at org.apache.hadoop.io.compress.SnappyCodec.getDecompressorType(SnappyCodec.java:193)
        at org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:178)
        at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:111)
        at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:237)
        at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
        at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

Re: native snappy library not available: this version of libhadoop was built without snappy support.

Posted by Wei-Chiu Chuang <we...@cloudera.com>.
It seems to me this issue is the direct result of MAPREDUCE-6577 <https://issues.apache.org/jira/browse/MAPREDUCE-6577>.
Since you’re on a CDH cluster, I would suggest you move up to CDH 5.7.2 or above, where this bug is fixed.
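
If you need an interim workaround before upgrading: a common suggestion for this symptom (untested on your setup) is to point the driver and executors explicitly at the native library directory your checknative output shows, since an executor JVM with a different java.library.path can hit this error even when the gateway machine passes the check:

spark-submit \
  --driver-library-path /usr/lib/hadoop/lib/native \
  --conf spark.executor.extraLibraryPath=/usr/lib/hadoop/lib/native \
  <your usual arguments>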

Best,
Wei-Chiu Chuang


Re: native snappy library not available: this version of libhadoop was built without snappy support.

Posted by Wei-Chiu Chuang <we...@cloudera.com>.
I see. Sorry for the confusion.

It seems to me the warning message is a bit misleading: it may also be printed if libhadoop cannot be loaded for any reason.
Can you turn on debug logging and see if the log contains either "Loaded the native-hadoop library" or "Failed to load native-hadoop with error"?
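
For example, something like this in the job's log4j.properties should surface them (NativeCodeLoader emits both messages at DEBUG level):

log4j.logger.org.apache.hadoop.util.NativeCodeLoader=DEBUG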


Wei-Chiu Chuang


Re: native snappy library not available: this version of libhadoop was built without snappy support.

Posted by Uthayan Suthakar <ut...@gmail.com>.
Hi Wei-Chiu,

My Hadoop version is Hadoop 2.6.0-cdh5.7.0.

But when I check the native libraries, Snappy shows as installed:

hadoop checknative
16/10/04 21:01:30 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
16/10/04 21:01:30 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
lz4:     true revision:99
bzip2:   true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so

Thanks.

Uthay


Re: native snappy library not available: this version of libhadoop was built without snappy support.

Posted by Wei-Chiu Chuang <we...@cloudera.com>.
Hi Uthayan,
What version of Hadoop do you have? The Hadoop 2.7.3 binary does not ship with Snappy precompiled; if that is the version you have, you may need to rebuild Hadoop yourself to include it.
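
If you do end up rebuilding, BUILDING.txt in the Hadoop source tree covers the native profile; roughly the following (flags as documented there, with -Drequire.snappy failing the build fast if libsnappy cannot be found):

mvn package -Pdist,native -DskipTests -Dtar -Drequire.snappy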

Wei-Chiu Chuang
