Posted to user@mahout.apache.org by Xavier Rampino <xr...@senscritique.com> on 2015/05/07 17:31:40 UTC

Re: java.lang.UnsatisfiedLinkError: no snappyjava in java.library.path

Late to the party, but I had the same problem and solved it like this:


   1. Download libsnappyjava.jnilib.
   2. Copy it to a directory on the java.library.path, e.g. /usr/lib/java/.
   3. Rename it to libsnappyjava.dylib.
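
The steps above can be sketched in shell. The /usr/lib/java path and file names come straight from the list; the actual copy is shown only as an echo (replace it with a sudo cp once you have the file), and the download URL is left out because it depends on where you get the library:

```shell
# Step 1: download libsnappyjava.jnilib first (source not shown here).
SRC=libsnappyjava.jnilib

# Step 2: /usr/lib/java/ is one directory that is typically on java.library.path.
DEST_DIR=/usr/lib/java

# Step 3: the JVM on OS X looks for .dylib, so rename .jnilib -> .dylib
# while copying; ${SRC%.jnilib} strips the old extension.
DEST="$DEST_DIR/${SRC%.jnilib}.dylib"

# Dry run; replace the echo with:  sudo cp "$SRC" "$DEST"
echo "would copy $SRC -> $DEST"
```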


On Wed, Jan 28, 2015 at 1:07 AM, Dmitriy Lyubimov <dl...@gmail.com> wrote:

> This looks like a Hadoop- or Spark-specific thing (the snappy codec is used
> by Spark by default). There should be a way to switch this to a more
> palatable library, but you will need to investigate it a little bit since I
> don't think anybody here knows Mac specifics.
>
> Better yet, figure out how to install the native snappy codec on your Mac.
> There should be a way. Ask on the Spark list.
>
> To switch codec to something else you may try to get something like this
> into system properties of the driver process:
> -Dspark.io.compression.codec=lzf
>
> Now, normally it would be as easy as adding it to the MAHOUT_OPTS
> environment variable; however, I think our current head is broken w.r.t.
> MAHOUT_OPTS for Spark processes (I have a fix for it elsewhere, but not in a
> public branch). So if you decide to switch codecs, you may need to hack the
> bin/mahout script a little, not sure.
>
> On Tue, Jan 27, 2015 at 3:07 PM, Kevin Zhang <
> zhangyongjiang@yahoo.com.invalid> wrote:
>
> > Thanks to Dmitriy for answering my previous question regarding the Spark
> > version. I just downgraded to spark-1.1.0-bin-hadoop2.4 and ran my command
> > "mahout spark-itemsimilarity -i ./mahout-input/order_item.tsv -o ./output
> > -f1 purchase -f2 view -os -ic 2 -fc 1 -td ," again. This time I got the
> > error "java.lang.UnsatisfiedLinkError: no snappyjava in java.library.path"
> > as attached. I'm using a Mac.
> >
> > Thanks for the help
> > -Kevin
> >
> >
> > at org.apache.spark.shuffle.hash.HashShuffleWriter.write(HashShuffleWriter.scala:65)
> > at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
> > at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
> > at org.apache.spark.scheduler.Task.run(Task.scala:54)
> > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
> > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > at java.lang.Thread.run(Thread.java:745)
> > Caused by: java.lang.UnsatisfiedLinkError: no snappyjava in java.library.path
> > at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1886)
> > at java.lang.Runtime.loadLibrary0(Runtime.java:849)
> > at java.lang.System.loadLibrary(System.java:1088)
> > at org.xerial.snappy.SnappyNativeLoader.loadLibrary(SnappyNativeLoader.java:52)
> > ... 26 more
> > 15/01/27 14:54:24 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 0)
> > org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
> > at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:229)
> > at org.xerial.snappy.Snappy.<clinit>(Snappy.java:44)
> > at org.xerial.snappy.SnappyOutputStream.<init>(SnappyOutputStream.java:79)
> > at org.apache.spark.io.SnappyCompressionCodec.compressedOutputStream(CompressionCodec.scala:125)
> > at org.apache.spark.storage.BlockManager.wrapForCompression(BlockManager.scala:1029)
> > at org.apache.spark.storage.BlockManager$$anonfun$8.apply(BlockManager.scala:608)
> > at org.apache.spark.storage.BlockManager$$anonfun$8.apply(BlockManager.scala:608)