Posted to user@spark.apache.org by Alan Burlison <Al...@oracle.com> on 2014/03/06 18:04:43 UTC

Building spark with native library support

Hi,

I've successfully built 0.9.0-incubating on Solaris using sbt, following 
the instructions at http://spark.incubator.apache.org/docs/latest/ and 
it seems to work OK. However, when I start it up I get an error about 
missing Hadoop native libraries. I can't find any mention of how to 
build the native components in the instructions; how is that done?

Thanks,

-- 
Alan Burlison
--

RE: Building spark with native library support

Posted by "Jeyaraj, Arockia R (Arockia)" <ar...@verizon.com>.
Hi,

I am trying to set up Spark on Windows as a development environment. I get the following error when I run sbt. Please help me resolve this issue. I work for Verizon and am on my company network, so I can't access the internet without a proxy.

C:\Users>sbt
Getting org.fusesource.jansi jansi 1.11 ...
You probably access the destination server through a proxy server that is not well configured.
You probably access the destination server through a proxy server that is not well configured.
You probably access the destination server through a proxy server that is not well configured.

:: problems summary ::
:::: WARNINGS
        Host repo.typesafe.com not found. url=http://repo.typesafe.com/typesafe/ivy-releases/org.fusesource.jansi/jansi/1.11/ivys/ivy.xml

        Host repo1.maven.org not found. url=http://repo1.maven.org/maven2/org/fusesource/jansi/jansi/1.11/jansi-1.11.pom

        Host repo1.maven.org not found. url=http://repo1.maven.org/maven2/org/fusesource/jansi/jansi/1.11/jansi-1.11.jar

                module not found: org.fusesource.jansi#jansi;1.11

        ==== local: tried

          C:\Users\v983654\.ivy2\local\org.fusesource.jansi\jansi\1.11\ivys\ivy.xml

          -- artifact org.fusesource.jansi#jansi;1.11!jansi.jar:

          C:\Users\v983654\.ivy2\local\org.fusesource.jansi\jansi\1.11\jars\jansi.jar

        ==== typesafe-ivy-releases: tried

          http://repo.typesafe.com/typesafe/ivy-releases/org.fusesource.jansi/jansi/1.11/ivys/ivy.xml

        ==== Maven Central: tried

          http://repo1.maven.org/maven2/org/fusesource/jansi/jansi/1.11/jansi-1.11.pom

          -- artifact org.fusesource.jansi#jansi;1.11!jansi.jar:

          http://repo1.maven.org/maven2/org/fusesource/jansi/jansi/1.11/jansi-1.11.jar

                ::::::::::::::::::::::::::::::::::::::::::::::

                ::          UNRESOLVED DEPENDENCIES         ::

                ::::::::::::::::::::::::::::::::::::::::::::::

                :: org.fusesource.jansi#jansi;1.11: not found

                ::::::::::::::::::::::::::::::::::::::::::::::



:: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
unresolved dependency: org.fusesource.jansi#jansi;1.11: not found
Error during sbt execution: Error retrieving required libraries
  (see C:\Users\v983654\.sbt\boot\update.log for complete log)
Error: Could not retrieve jansi 1.11
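
For reference, sbt runs on the JVM and its launcher normally honours JAVA_OPTS, so proxy settings can usually be passed as standard JVM system properties before starting sbt. A minimal sketch for the Windows command prompt, assuming a proxy at proxy.example.com on port 8080 (host and port are placeholders; substitute your company's values):

    REM Point the sbt JVM at the corporate proxy (host/port are placeholders)
    set JAVA_OPTS=-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=8080
    sbt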

Thanks
Arockia Raja

-----Original Message-----
From: Matei Zaharia [mailto:matei.zaharia@gmail.com] 
Sent: Thursday, March 06, 2014 11:44 AM
To: user@spark.apache.org
Subject: Re: Building spark with native library support

Is it an error, or just a warning? In any case, you need to get those libraries from a build of Hadoop for your platform. Then add them to the SPARK_LIBRARY_PATH environment variable in conf/spark-env.sh, or to your -Djava.library.path if launching an application separately.

These libraries just speed up some compression codecs BTW, so it should be fine to run without them too.

Matei

On Mar 6, 2014, at 9:04 AM, Alan Burlison <Al...@oracle.com> wrote:

> Hi,
> 
> I've successfully built 0.9.0-incubating on Solaris using sbt, following the instructions at http://spark.incubator.apache.org/docs/latest/ and it seems to work OK. However, when I start it up I get an error about missing Hadoop native libraries. I can't find any mention of how to build the native components in the instructions; how is that done?
> 
> Thanks,
> 
> -- 
> Alan Burlison
> --


Re: Building spark with native library support

Posted by Alan Burlison <Al...@oracle.com>.
On 06/03/2014 18:55, Matei Zaharia wrote:

> For the native libraries, you can use an existing Hadoop build and
> just put them on the path. For linking to Hadoop, Spark grabs it
> through Maven, but you can do "mvn install" locally on your version
> of Hadoop to install it to your local Maven cache, and then configure
> Spark to use that version. Spark never builds Hadoop itself, it just
> downloads it through Maven.

OK, thanks for the pointers.

-- 
Alan Burlison
--

Re: Building spark with native library support

Posted by Matei Zaharia <ma...@gmail.com>.
For the native libraries, you can use an existing Hadoop build and just put them on the path. For linking to Hadoop, Spark grabs it through Maven, but you can do "mvn install" locally on your version of Hadoop to install it to your local Maven cache, and then configure Spark to use that version. Spark never builds Hadoop itself, it just downloads it through Maven.
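
Roughly, that workflow might look like the following sketch (the Hadoop version 2.2.0 is a placeholder; for the 0.9 sbt build, the Hadoop version to link against is selected with the SPARK_HADOOP_VERSION variable described in the build docs):

    # In your Hadoop source tree: install the build into the local Maven cache (~/.m2)
    mvn install -DskipTests

    # In the Spark tree: build against that Hadoop version
    SPARK_HADOOP_VERSION=2.2.0 sbt/sbt assembly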

Matei

On Mar 6, 2014, at 10:38 AM, Alan Burlison <Al...@oracle.com> wrote:

> On 06/03/2014 17:44, Matei Zaharia wrote:
> 
>> Is it an error, or just a warning? In any case, you need to get those
>> libraries from a build of Hadoop for your platform. Then add them to
>> the SPARK_LIBRARY_PATH environment variable in conf/spark-env.sh, or
>> to your -Djava.library.path if launching an application separately.
> 
> OK, thanks. Is it possible to get Spark to build using an existing Hadoop build tree, or does Spark insist on building its own Hadoop? The instructions at https://spark.incubator.apache.org/docs/latest/ seem to suggest that it always builds its own Hadoop version.
> 
> I may also have to fiddle with Hadoop to get it to build on Solaris if the instructions at http://www.oracle.com/technetwork/articles/servers-storage-admin/sol-howto-native-hadoop-s11-1946524.html are still relevant.
> 
>> These libraries just speed up some compression codecs BTW, so it
>> should be fine to run without them too.
> 
> Yes, it works as-is but I have a need for speed :-)
> 
> Thanks,
> 
> -- 
> Alan Burlison
> --


Re: Building spark with native library support

Posted by Alan Burlison <Al...@oracle.com>.
On 06/03/2014 17:44, Matei Zaharia wrote:

> Is it an error, or just a warning? In any case, you need to get those
> libraries from a build of Hadoop for your platform. Then add them to
> the SPARK_LIBRARY_PATH environment variable in conf/spark-env.sh, or
> to your -Djava.library.path if launching an application separately.

OK, thanks. Is it possible to get Spark to build using an existing 
Hadoop build tree, or does Spark insist on building its own Hadoop? The 
instructions at https://spark.incubator.apache.org/docs/latest/ seem to 
suggest that it always builds its own Hadoop version.

I may also have to fiddle with Hadoop to get it to build on Solaris if 
the instructions at 
http://www.oracle.com/technetwork/articles/servers-storage-admin/sol-howto-native-hadoop-s11-1946524.html 
are still relevant.

> These libraries just speed up some compression codecs BTW, so it
> should be fine to run without them too.

Yes, it works as-is but I have a need for speed :-)

Thanks,

-- 
Alan Burlison
--

Re: Building spark with native library support

Posted by Matei Zaharia <ma...@gmail.com>.
Is it an error, or just a warning? In any case, you need to get those libraries from a build of Hadoop for your platform. Then add them to the SPARK_LIBRARY_PATH environment variable in conf/spark-env.sh, or to your -Djava.library.path if launching an application separately.

These libraries just speed up some compression codecs BTW, so it should be fine to run without them too.
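
Concretely, the two options might look like the sketch below. The native library directory /opt/hadoop/lib/native is an assumption; use whatever directory your Hadoop build placed libhadoop.so in.

    # Option 1: in conf/spark-env.sh, add the directory containing
    # libhadoop.so to Spark's library path (path is a placeholder)
    export SPARK_LIBRARY_PATH=/opt/hadoop/lib/native

    # Option 2: when launching an application yourself (jar and class
    # names are placeholders)
    java -Djava.library.path=/opt/hadoop/lib/native -cp my-app.jar com.example.MyApp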

Matei

On Mar 6, 2014, at 9:04 AM, Alan Burlison <Al...@oracle.com> wrote:

> Hi,
> 
> I've successfully built 0.9.0-incubating on Solaris using sbt, following the instructions at http://spark.incubator.apache.org/docs/latest/ and it seems to work OK. However, when I start it up I get an error about missing Hadoop native libraries. I can't find any mention of how to build the native components in the instructions; how is that done?
> 
> Thanks,
> 
> -- 
> Alan Burlison
> --