Posted to user@spark.apache.org by Bilinmek Istemiyor <be...@gmail.com> on 2015/10/25 15:56:37 UTC

Newbie Help for spark compilation problem

I am just starting out with Apache Spark. I have zero knowledge about the
Spark environment, Scala, and sbt. I have a build problem which I could
not solve. Any help is much appreciated.

I am using Kubuntu 14.04, Java 1.7.0_80, Scala 2.11.7 and Spark 1.5.1.

I tried to compile Spark from source and received the following errors:

[error] impossible to get artifacts when data has not been
loaded. IvyNode = org.scala-lang#scala-library;2.10.3
[error] (hive/*:update)
java.lang.IllegalStateException: impossible to get artifacts when data has
not been loaded. IvyNode = org.scala-lang#scala-library;2.10.3
[error] (streaming-flume-sink/avro:generate)
org.apache.avro.SchemaParseException: Undefined name: "strıng"
[error] (streaming-kafka-assembly/*:assembly)
java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
[error] (streaming-mqtt/test:assembly)
java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
[error] (assembly/*:assembly)
java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
[error] (streaming-mqtt-assembly/*:assembly)
java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
[error] Total time: 1128 s, completed 25.Eki.2015 11:00:52

Sorry about the strange characters; they appear to be ANSI color escape
codes from sbt. I tried to capture the output with

sbt clean assembly 2>&1 | tee compile.txt

and compile.txt was full of them.  I have attached the output of the
full compile process, "compile.txt".
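
If there is a cleaner way to capture this, please let me know; I found
suggestions to disable sbt's color output or to strip the escape codes
afterwards, e.g. (a sketch, assuming sbt 0.13's log.noformat property
and GNU sed):

sbt -Dsbt.log.noformat=true clean assembly 2>&1 | tee compile.txt
sed -i 's/\x1b\[[0-9;]*m//g' compile.txt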

Re: Newbie Help for spark compilation problem

Posted by Todd Nist <ts...@gmail.com>.
So yes, the individual artifacts are released; however, there is no
prebuilt deployable bundle for Spark 1.5.1 and Scala 2.11.7, something
like spark-1.5.1-bin-hadoop-2.6_scala-2.11.tgz.  The Spark site even
states this:

*Note: Scala 2.11 users should download the Spark source package and
build with Scala 2.11 support
<http://spark.apache.org/docs/latest/building-spark.html#building-for-scala-211>.*

So if you want one simple deployable for a standalone environment, I
thought you had to perform the make-distribution step as I described.

Clearly the individual artifacts are there, as you state.  Is there a
provided 2.11 tgz available as well?  I did not think there was; if there
is, then should the documentation on the download site be changed to
reflect this?

Sorry for the confusion.

-Todd

On Sun, Oct 25, 2015 at 4:07 PM, Sean Owen <so...@cloudera.com> wrote:

> No, 2.11 artifacts are in fact published:
> http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22spark-parent_2.11%22
>
> On Sun, Oct 25, 2015 at 7:37 PM, Todd Nist <ts...@gmail.com> wrote:
> > Sorry Sean, you are absolutely right, it supports 2.11. All I meant is
> > there is no release available as a standard download and that one has to
> > build it. Thanks for the clarification.
> > -Todd
> >
>

Re: Newbie Help for spark compilation problem

Posted by Sean Owen <so...@cloudera.com>.
No, 2.11 artifacts are in fact published:
http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22spark-parent_2.11%22

On Sun, Oct 25, 2015 at 7:37 PM, Todd Nist <ts...@gmail.com> wrote:
> Sorry Sean, you are absolutely right, it supports 2.11. All I meant is there
> is no release available as a standard download and that one has to build it.
> Thanks for the clarification.
> -Todd
>



Re: Newbie Help for spark compilation problem

Posted by Todd Nist <ts...@gmail.com>.
Sorry Sean, you are absolutely right, it supports 2.11. All I meant is there
is no release available as a standard download and that one has to build
it.  Thanks for the clarification.
-Todd

On Sunday, October 25, 2015, Sean Owen <so...@cloudera.com> wrote:

> Hm, why do you say it doesn't support 2.11? It does.
>
> It is not even this difficult; you just need a source distribution,
> and then run "./dev/change-scala-version.sh 2.11" as you say. Then
> build as normal.
>
> On Sun, Oct 25, 2015 at 4:00 PM, Todd Nist <tsindotg@gmail.com> wrote:
> > Hi Bilinmek,
> >
> > Spark 1.5.x does not support Scala 2.11.7, so the easiest thing to do is
> > build it like you are trying.  Here are the steps I followed to build it
> > on a Mac OS X 10.10.5 environment; it should be very similar on Ubuntu.
> >
> > 1.  Set the JAVA_HOME environment variable in my bash session via export
> > JAVA_HOME=$(/usr/libexec/java_home).
> > 2.  Spark is easiest to build with Maven, so ensure Maven is installed; I
> > installed 3.3.x.
> > 3.  Download the source from Spark's site and extract it.
> > 4.  Change into the spark-1.5.1 folder and run:
> >        ./dev/change-scala-version.sh 2.11
> > 5.  Issue the following command to build and create a distribution:
> >
> > ./make-distribution.sh --name hadoop-2.6_scala-2.11 --tgz -Pyarn
> > -Phadoop-2.6 -Dhadoop.version=2.6.0 -Dscala-2.11 -DskipTests
> >
> > This will provide you with a fully self-contained installation of Spark
> > for Scala 2.11, including scripts and the like.  There are some
> > limitations; see
> > http://spark.apache.org/docs/latest/building-spark.html#building-for-scala-211
> > for what is not supported.
> >
> > HTH,
> >
> > -Todd
> >
> >
> > On Sun, Oct 25, 2015 at 10:56 AM, Bilinmek Istemiyor <benibilme@gmail.com>
> > wrote:
> >>
> >>
> >> I am just starting out with Apache Spark. I have zero knowledge about
> >> the Spark environment, Scala, and sbt. I have a build problem which I
> >> could not solve. Any help is much appreciated.
> >>
> >> I am using Kubuntu 14.04, Java 1.7.0_80, Scala 2.11.7 and Spark 1.5.1.
> >>
> >> I tried to compile Spark from source and received the following errors:
> >>
> >> [error] impossible to get artifacts when data has not been
> >> loaded. IvyNode = org.scala-lang#scala-library;2.10.3
> >> [error] (hive/*:update)
> >> java.lang.IllegalStateException: impossible to get artifacts when data
> >> has not been loaded. IvyNode = org.scala-lang#scala-library;2.10.3
> >> [error] (streaming-flume-sink/avro:generate)
> >> org.apache.avro.SchemaParseException: Undefined name: "strıng"
> >> [error] (streaming-kafka-assembly/*:assembly)
> >> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
> >> [error] (streaming-mqtt/test:assembly)
> >> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
> >> [error] (assembly/*:assembly)
> >> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
> >> [error] (streaming-mqtt-assembly/*:assembly)
> >> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
> >> [error] Total time: 1128 s, completed 25.Eki.2015 11:00:52
> >>
> >> Sorry about the strange characters; they appear to be ANSI color escape
> >> codes from sbt. I tried to capture the output with
> >>
> >> sbt clean assembly 2>&1 | tee compile.txt
> >>
> >> and compile.txt was full of them.  I have attached the output of the
> >> full compile process, "compile.txt".
> >>
> >>
> >
> >
>

Re: Newbie Help for spark compilation problem

Posted by Sean Owen <so...@cloudera.com>.
Hm, why do you say it doesn't support 2.11? It does.

It is not even this difficult; you just need a source distribution,
and then run "./dev/change-scala-version.sh 2.11" as you say. Then
build as normal.
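
For example, a minimal end-to-end sequence would be something like this (a
sketch, assuming the spark-1.5.1.tgz source tarball and the build/mvn
wrapper that ships in the source tree):

tar -xzf spark-1.5.1.tgz
cd spark-1.5.1
./dev/change-scala-version.sh 2.11
build/mvn -Pyarn -Phadoop-2.6 -Dscala-2.11 -DskipTests clean package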

On Sun, Oct 25, 2015 at 4:00 PM, Todd Nist <ts...@gmail.com> wrote:
> Hi Bilinmek,
>
> Spark 1.5.x does not support Scala 2.11.7, so the easiest thing to do is
> build it like you are trying.  Here are the steps I followed to build it on
> a Mac OS X 10.10.5 environment; it should be very similar on Ubuntu.
>
> 1.  Set the JAVA_HOME environment variable in my bash session via export
> JAVA_HOME=$(/usr/libexec/java_home).
> 2.  Spark is easiest to build with Maven, so ensure Maven is installed; I
> installed 3.3.x.
> 3.  Download the source from Spark's site and extract it.
> 4.  Change into the spark-1.5.1 folder and run:
>        ./dev/change-scala-version.sh 2.11
> 5.  Issue the following command to build and create a distribution:
>
> ./make-distribution.sh --name hadoop-2.6_scala-2.11 --tgz -Pyarn
> -Phadoop-2.6 -Dhadoop.version=2.6.0 -Dscala-2.11 -DskipTests
>
> This will provide you with a fully self-contained installation of Spark
> for Scala 2.11, including scripts and the like.  There are some
> limitations; see
> http://spark.apache.org/docs/latest/building-spark.html#building-for-scala-211
> for what is not supported.
>
> HTH,
>
> -Todd
>
>
> On Sun, Oct 25, 2015 at 10:56 AM, Bilinmek Istemiyor <be...@gmail.com>
> wrote:
>>
>>
>> I am just starting out with Apache Spark. I have zero knowledge about the
>> Spark environment, Scala, and sbt. I have a build problem which I could
>> not solve. Any help is much appreciated.
>>
>> I am using Kubuntu 14.04, Java 1.7.0_80, Scala 2.11.7 and Spark 1.5.1.
>>
>> I tried to compile Spark from source and received the following errors:
>>
>> [error] impossible to get artifacts when data has not been
>> loaded. IvyNode = org.scala-lang#scala-library;2.10.3
>> [error] (hive/*:update)
>> java.lang.IllegalStateException: impossible to get artifacts when data has
>> not been loaded. IvyNode = org.scala-lang#scala-library;2.10.3
>> [error] (streaming-flume-sink/avro:generate)
>> org.apache.avro.SchemaParseException: Undefined name: "strıng"
>> [error] (streaming-kafka-assembly/*:assembly)
>> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
>> [error] (streaming-mqtt/test:assembly)
>> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
>> [error] (assembly/*:assembly)
>> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
>> [error] (streaming-mqtt-assembly/*:assembly)
>> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
>> [error] Total time: 1128 s, completed 25.Eki.2015 11:00:52
>>
>> Sorry about the strange characters; they appear to be ANSI color escape
>> codes from sbt. I tried to capture the output with
>>
>> sbt clean assembly 2>&1 | tee compile.txt
>>
>> and compile.txt was full of them.  I have attached the output of the
>> full compile process, "compile.txt".
>>
>>
>
>



Re: Newbie Help for spark compilation problem

Posted by Ted Yu <yu...@gmail.com>.
A dependency couldn't be downloaded:

[INFO] +- com.h2database:h2:jar:1.4.183:test

Have you checked your network settings?
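
One quick check, assuming curl is available, is whether you can fetch the
jar directly from Maven Central and the connection holds up:

curl -I https://repo1.maven.org/maven2/com/h2database/h2/1.4.183/h2-1.4.183.jar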

Cheers

On Sun, Oct 25, 2015 at 10:22 AM, Bilinmek Istemiyor <be...@gmail.com>
wrote:

> Thank you for the quick reply. You are a godsend. I have not programmed in
> Java for a long time, and I know nothing about Maven, Scala, sbt, and the
> Spark stack. I used Java 7 since the build failed with Java 8. Which Java
> version do you advise in general for Spark? I can downgrade the Scala
> version as well; can you advise me on a version number? I did not see any
> information on the Spark build page.
>
> I followed your directions. However, the build finished with the following
> output. I do not know if this SQL Project build failure is normal.
>
> [INFO] ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Spark Project Parent POM ........................... SUCCESS [04:49 min]
> [INFO] Spark Project Launcher ............................. SUCCESS [04:14 min]
> [INFO] Spark Project Networking ........................... SUCCESS [ 20.354 s]
> [INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [  5.287 s]
> [INFO] Spark Project Unsafe ............................... SUCCESS [ 18.095 s]
> [INFO] Spark Project Core ................................. SUCCESS [12:59 min]
> [INFO] Spark Project Bagel ................................ SUCCESS [02:54 min]
> [INFO] Spark Project GraphX ............................... SUCCESS [ 29.764 s]
> [INFO] Spark Project Streaming ............................ SUCCESS [01:12 min]
> [INFO] Spark Project Catalyst ............................. SUCCESS [02:58 min]
> [INFO] Spark Project SQL .................................. FAILURE [02:50 min]
> [INFO] Spark Project ML Library ........................... SKIPPED
> [INFO] Spark Project Tools ................................ SKIPPED
> [INFO] Spark Project Hive ................................. SKIPPED
> [INFO] Spark Project REPL ................................. SKIPPED
> [INFO] Spark Project YARN ................................. SKIPPED
> [INFO] Spark Project Assembly ............................. SKIPPED
> [INFO] Spark Project External Twitter ..................... SKIPPED
> [INFO] Spark Project External Flume Sink .................. SKIPPED
> [INFO] Spark Project External Flume ....................... SKIPPED
> [INFO] Spark Project External Flume Assembly .............. SKIPPED
> [INFO] Spark Project External MQTT ........................ SKIPPED
> [INFO] Spark Project External MQTT Assembly ............... SKIPPED
> [INFO] Spark Project External ZeroMQ ...................... SKIPPED
> [INFO] Spark Project External Kafka ....................... SKIPPED
> [INFO] Spark Project Examples ............................. SKIPPED
> [INFO] Spark Project External Kafka Assembly .............. SKIPPED
> [INFO] Spark Project YARN Shuffle Service ................. SKIPPED
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 33:15 min
> [INFO] Finished at: 2015-10-25T19:12:15+02:00
> [INFO] Final Memory: 63M/771M
> [INFO] ------------------------------------------------------------------------
> [ERROR] Failed to execute goal on project spark-sql_2.11: Could not resolve
> dependencies for project org.apache.spark:spark-sql_2.11:jar:1.5.1: Could not
> transfer artifact com.h2database:h2:jar:1.4.183 from/to central
> (https://repo1.maven.org/maven2): GET request of:
> com/h2database/h2/1.4.183/h2-1.4.183.jar from central failed: SSL peer shut
> down incorrectly -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please
> read the following articles:
> [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the command
> [ERROR]   mvn <goals> -rf :spark-sql_2.11
>
>
> On Sun, Oct 25, 2015 at 6:00 PM, Todd Nist <ts...@gmail.com> wrote:
>
>> Hi Bilinmek,
>>
>> Spark 1.5.x does not support Scala 2.11.7, so the easiest thing to do is
>> build it like you are trying.  Here are the steps I followed to build it on
>> a Mac OS X 10.10.5 environment; it should be very similar on Ubuntu.
>>
>> 1.  Set the JAVA_HOME environment variable in my bash session via export
>> JAVA_HOME=$(/usr/libexec/java_home).
>> 2.  Spark is easiest to build with Maven, so ensure Maven is installed; I
>> installed 3.3.x.
>> 3.  Download the source from Spark's site and extract it.
>> 4.  Change into the spark-1.5.1 folder and run:
>>        ./dev/change-scala-version.sh 2.11
>> 5.  Issue the following command to build and create a distribution:
>>
>> ./make-distribution.sh --name hadoop-2.6_scala-2.11 --tgz -Pyarn
>> -Phadoop-2.6 -Dhadoop.version=2.6.0 -Dscala-2.11 -DskipTests
>>
>> This will provide you with a fully self-contained installation of Spark
>> for Scala 2.11, including scripts and the like.  There are some
>> limitations; see
>> http://spark.apache.org/docs/latest/building-spark.html#building-for-scala-211
>> for what is not supported.
>>
>> HTH,
>>
>> -Todd
>>
>> On Sun, Oct 25, 2015 at 10:56 AM, Bilinmek Istemiyor <benibilme@gmail.com>
>> wrote:
>>
>>>
>>> I am just starting out with Apache Spark. I have zero knowledge about the
>>> Spark environment, Scala, and sbt. I have a build problem which I could
>>> not solve. Any help is much appreciated.
>>>
>>> I am using Kubuntu 14.04, Java 1.7.0_80, Scala 2.11.7 and Spark 1.5.1.
>>>
>>> I tried to compile Spark from source and received the following errors:
>>>
>>> [error] impossible to get artifacts when data has not been
>>> loaded. IvyNode = org.scala-lang#scala-library;2.10.3
>>> [error] (hive/*:update)
>>> java.lang.IllegalStateException: impossible to get artifacts when data has
>>> not been loaded. IvyNode = org.scala-lang#scala-library;2.10.3
>>> [error] (streaming-flume-sink/avro:generate)
>>> org.apache.avro.SchemaParseException: Undefined name: "strıng"
>>> [error] (streaming-kafka-assembly/*:assembly)
>>> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
>>> [error] (streaming-mqtt/test:assembly)
>>> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
>>> [error] (assembly/*:assembly)
>>> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
>>> [error] (streaming-mqtt-assembly/*:assembly)
>>> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
>>> [error] Total time: 1128 s, completed 25.Eki.2015 11:00:52
>>>
>>> Sorry about the strange characters; they appear to be ANSI color escape
>>> codes from sbt. I tried to capture the output with
>>>
>>> sbt clean assembly 2>&1 | tee compile.txt
>>>
>>> and compile.txt was full of them.  I have attached the output of the
>>> full compile process, "compile.txt".
>>>
>>>
>>>
>>
>>
>

Re: Newbie Help for spark compilation problem

Posted by Bilinmek Istemiyor <be...@gmail.com>.
Thank you for the quick reply. You are a godsend. I have not programmed in
Java for a long time, and I know nothing about Maven, Scala, sbt, and the
Spark stack. I used Java 7 since the build failed with Java 8. Which Java
version do you advise in general for Spark? I can downgrade the Scala
version as well; can you advise me on a version number? I did not see any
information on the Spark build page.

I followed your directions. However, the build finished with the following
output. I do not know if this SQL Project build failure is normal.

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Spark Project Parent POM ........................... SUCCESS [04:49 min]
[INFO] Spark Project Launcher ............................. SUCCESS [04:14 min]
[INFO] Spark Project Networking ........................... SUCCESS [ 20.354 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [  5.287 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [ 18.095 s]
[INFO] Spark Project Core ................................. SUCCESS [12:59 min]
[INFO] Spark Project Bagel ................................ SUCCESS [02:54 min]
[INFO] Spark Project GraphX ............................... SUCCESS [ 29.764 s]
[INFO] Spark Project Streaming ............................ SUCCESS [01:12 min]
[INFO] Spark Project Catalyst ............................. SUCCESS [02:58 min]
[INFO] Spark Project SQL .................................. FAILURE [02:50 min]
[INFO] Spark Project ML Library ........................... SKIPPED
[INFO] Spark Project Tools ................................ SKIPPED
[INFO] Spark Project Hive ................................. SKIPPED
[INFO] Spark Project REPL ................................. SKIPPED
[INFO] Spark Project YARN ................................. SKIPPED
[INFO] Spark Project Assembly ............................. SKIPPED
[INFO] Spark Project External Twitter ..................... SKIPPED
[INFO] Spark Project External Flume Sink .................. SKIPPED
[INFO] Spark Project External Flume ....................... SKIPPED
[INFO] Spark Project External Flume Assembly .............. SKIPPED
[INFO] Spark Project External MQTT ........................ SKIPPED
[INFO] Spark Project External MQTT Assembly ............... SKIPPED
[INFO] Spark Project External ZeroMQ ...................... SKIPPED
[INFO] Spark Project External Kafka ....................... SKIPPED
[INFO] Spark Project Examples ............................. SKIPPED
[INFO] Spark Project External Kafka Assembly .............. SKIPPED
[INFO] Spark Project YARN Shuffle Service ................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 33:15 min
[INFO] Finished at: 2015-10-25T19:12:15+02:00
[INFO] Final Memory: 63M/771M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project spark-sql_2.11: Could not resolve
dependencies for project org.apache.spark:spark-sql_2.11:jar:1.5.1: Could not
transfer artifact com.h2database:h2:jar:1.4.183 from/to central
(https://repo1.maven.org/maven2): GET request of:
com/h2database/h2/1.4.183/h2-1.4.183.jar from central failed: SSL peer shut
down incorrectly -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please
read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :spark-sql_2.11
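
If it was just a flaky download, I suppose I can resume from the failed
module as Maven suggests once the network issue clears, with something
like this (a sketch, reusing the same profiles and flags as the
make-distribution step):

build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -Dscala-2.11 -DskipTests package -rf :spark-sql_2.11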


On Sun, Oct 25, 2015 at 6:00 PM, Todd Nist <ts...@gmail.com> wrote:

> Hi Bilinmek,
>
> Spark 1.5.x does not support Scala 2.11.7, so the easiest thing to do is
> build it like you are trying.  Here are the steps I followed to build it on
> a Mac OS X 10.10.5 environment; it should be very similar on Ubuntu.
>
> 1.  Set the JAVA_HOME environment variable in my bash session via export
> JAVA_HOME=$(/usr/libexec/java_home).
> 2.  Spark is easiest to build with Maven, so ensure Maven is installed; I
> installed 3.3.x.
> 3.  Download the source from Spark's site and extract it.
> 4.  Change into the spark-1.5.1 folder and run:
>        ./dev/change-scala-version.sh 2.11
> 5.  Issue the following command to build and create a distribution:
>
> ./make-distribution.sh --name hadoop-2.6_scala-2.11 --tgz -Pyarn
> -Phadoop-2.6 -Dhadoop.version=2.6.0 -Dscala-2.11 -DskipTests
>
> This will provide you with a fully self-contained installation of Spark
> for Scala 2.11, including scripts and the like.  There are some
> limitations; see
> http://spark.apache.org/docs/latest/building-spark.html#building-for-scala-211
> for what is not supported.
>
> HTH,
>
> -Todd
>
> On Sun, Oct 25, 2015 at 10:56 AM, Bilinmek Istemiyor <be...@gmail.com>
> wrote:
>
>>
>> I am just starting out with Apache Spark. I have zero knowledge about the
>> Spark environment, Scala, and sbt. I have a build problem which I could
>> not solve. Any help is much appreciated.
>>
>> I am using Kubuntu 14.04, Java 1.7.0_80, Scala 2.11.7 and Spark 1.5.1.
>>
>> I tried to compile Spark from source and received the following errors:
>>
>> [error] impossible to get artifacts when data has not been
>> loaded. IvyNode = org.scala-lang#scala-library;2.10.3
>> [error] (hive/*:update)
>> java.lang.IllegalStateException: impossible to get artifacts when data has
>> not been loaded. IvyNode = org.scala-lang#scala-library;2.10.3
>> [error] (streaming-flume-sink/avro:generate)
>> org.apache.avro.SchemaParseException: Undefined name: "strıng"
>> [error] (streaming-kafka-assembly/*:assembly)
>> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
>> [error] (streaming-mqtt/test:assembly)
>> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
>> [error] (assembly/*:assembly)
>> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
>> [error] (streaming-mqtt-assembly/*:assembly)
>> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
>> [error] Total time: 1128 s, completed 25.Eki.2015 11:00:52
>>
>> Sorry about the strange characters; they appear to be ANSI color escape
>> codes from sbt. I tried to capture the output with
>>
>> sbt clean assembly 2>&1 | tee compile.txt
>>
>> and compile.txt was full of them.  I have attached the output of the
>> full compile process, "compile.txt".
>>
>>
>>
>
>

Re: Newbie Help for spark compilation problem

Posted by Todd Nist <ts...@gmail.com>.
Hi Bilinmek,

Spark 1.5.x does not support Scala 2.11.7, so the easiest thing to do is
build it like you are trying.  Here are the steps I followed to build it on
a Mac OS X 10.10.5 environment; it should be very similar on Ubuntu.

1.  Set the JAVA_HOME environment variable in my bash session via export
JAVA_HOME=$(/usr/libexec/java_home).
2.  Spark is easiest to build with Maven, so ensure Maven is installed; I
installed 3.3.x.
3.  Download the source from Spark's site and extract it.
4.  Change into the spark-1.5.1 folder and run:
       ./dev/change-scala-version.sh 2.11
5.  Issue the following command to build and create a distribution:

./make-distribution.sh --name hadoop-2.6_scala-2.11 --tgz -Pyarn
-Phadoop-2.6 -Dhadoop.version=2.6.0 -Dscala-2.11 -DskipTests

This will provide you with a fully self-contained installation of Spark
for Scala 2.11, including scripts and the like.  There are some
limitations; see
http://spark.apache.org/docs/latest/building-spark.html#building-for-scala-211
for what is not supported.
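
Once the build finishes you will have a tarball in the source root, so
deploying is just unpack and run, something like this (a sketch; the file
name follows from the --name and --tgz options above):

tar -xzf spark-1.5.1-bin-hadoop-2.6_scala-2.11.tgz
cd spark-1.5.1-bin-hadoop-2.6_scala-2.11
./bin/spark-shell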

HTH,

-Todd

On Sun, Oct 25, 2015 at 10:56 AM, Bilinmek Istemiyor <be...@gmail.com>
wrote:

>
> I am just starting out with Apache Spark. I have zero knowledge about the
> Spark environment, Scala, and sbt. I have a build problem which I could
> not solve. Any help is much appreciated.
>
> I am using Kubuntu 14.04, Java 1.7.0_80, Scala 2.11.7 and Spark 1.5.1.
>
> I tried to compile Spark from source and received the following errors:
>
> [error] impossible to get artifacts when data has not been
> loaded. IvyNode = org.scala-lang#scala-library;2.10.3
> [error] (hive/*:update)
> java.lang.IllegalStateException: impossible to get artifacts when data has
> not been loaded. IvyNode = org.scala-lang#scala-library;2.10.3
> [error] (streaming-flume-sink/avro:generate)
> org.apache.avro.SchemaParseException: Undefined name: "strıng"
> [error] (streaming-kafka-assembly/*:assembly)
> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
> [error] (streaming-mqtt/test:assembly)
> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
> [error] (assembly/*:assembly)
> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
> [error] (streaming-mqtt-assembly/*:assembly)
> java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
> [error] Total time: 1128 s, completed 25.Eki.2015 11:00:52
>
> Sorry about the strange characters; they appear to be ANSI color escape
> codes from sbt. I tried to capture the output with
>
> sbt clean assembly 2>&1 | tee compile.txt
>
> and compile.txt was full of them.  I have attached the output of the
> full compile process, "compile.txt".
>
>
>