Posted to users@zeppelin.apache.org by Daniel Valdivia <ho...@danielvaldivia.com> on 2016/02/02 00:58:56 UTC

Upgrade spark to 1.6.0

Hi,

I'd like to ask if there's an easy way to upgrade Spark to 1.6.0 from the 1.4.x that's bundled with the current release of Zeppelin. Would updating the pom.xml and compiling suffice?

Cheers

Re: Upgrade spark to 1.6.0

Posted by Felix Cheung <fe...@hotmail.com>.
Your problem is likely the -Pvendor-repo flag if you are not running Cloudera CDH.
Try this:

mvn clean package -Pspark-1.6 -Dspark.version=1.6.0 -Phadoop-2.6 -Pyarn -Ppyspark -DskipTests
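
Alternatively, if you already have a Spark 1.6 install on the machine, you can point Zeppelin at it instead of using the embedded Spark (assuming your Zeppelin build supports an external SPARK_HOME; the paths below are illustrative). A minimal sketch for conf/zeppelin-env.sh:

# Point Zeppelin at an existing Spark 1.6 installation (illustrative path)
export SPARK_HOME=/opt/spark-1.6.0-bin-hadoop2.6
# Only needed when submitting to YARN
export HADOOP_CONF_DIR=/etc/hadoop/conf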




On Wed, Feb 3, 2016 at 11:43 AM -0800, "Daniel Valdivia" <ho...@danielvaldivia.com> wrote:





I don't need any specific version of Hadoop; I actually removed it from the build command and still get the error. I just need Spark 1.6.


> On Feb 3, 2016, at 9:05 AM, Felix Cheung <fe...@hotmail.com> wrote:
>
> I think his build command only works with Cloudera CDH 5.4.8, as you can see. Mismatched Akka versions are very common when the Hadoop distribution is different. Which version of Spark and which Hadoop distribution are you running?
>
>
>
>
>
> On Tue, Feb 2, 2016 at 1:36 PM -0800, "Daniel Valdivia" <hola@danielvaldivia.com> wrote:
>
> Hello,
>
> An update on the matter: using the build command
>
> mvn clean package -Pspark-1.6 -Dspark.version=1.6.0 -Dhadoop.version=2.6.0-cdh5.4.8 -Phadoop-2.6 -Pyarn -Ppyspark -Pvendor-repo -DskipTests
>
> I end up getting the following error stack trace upon executing a new JSON:
>
> akka.ConfigurationException: Akka JAR version [2.2.3] does not match the provided config version [2.3.11]
>     at akka.actor.ActorSystem$Settings.<init>(ActorSystem.scala:181)
>     at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:470)
>     at akka.actor.ActorSystem$.apply(ActorSystem.scala:111)
>     at akka.actor.ActorSystem$.apply(ActorSystem.scala:104)
>     at org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$$doCreateActorSystem(AkkaUtils.scala:121)
>     at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:53)
>     at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:52)
>     at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1964)
>     at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
>     at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1955)
>     at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:55)
>     at org.apache.spark.SparkEnv$.create(SparkEnv.scala:266)
>     at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193)
>     at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:288)
>     at org.apache.spark.SparkContext.<init>(SparkContext.scala:457)
>     at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:339)
>     at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:145)
>     at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:465)
>     at org.apache.zeppelin.interpreter.ClassloaderInterpreter.open(ClassloaderInterpreter.java:74)
>     at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:68)
>     at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:92)
>     at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:300)
>     at org.apache.zeppelin.scheduler.Job.run(Job.java:169)
>     at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:134)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
> There's some mention of this problem on Stack Overflow, but it seems it was fixed:
>
> http://stackoverflow.com/questions/32294276/how-to-connect-zeppelin-to-spark-1-5-built-from-the-sources
>
> Any idea how to deal with this Akka library problem?
>
>> On Feb 2, 2016, at 12:02 PM, Daniel Valdivia <hola@danielvaldivia.com> wrote:
>>
>> Hi,
>>
>> Thanks for the suggestion; I'm running Maven with Ben's command.
>>
>> Cheers!
>>
>>> On Feb 1, 2016, at 7:47 PM, Benjamin Kim <bbuild11@gmail.com> wrote:
>>>
>>> Hi Felix,
>>>
>>> After installing Spark 1.6, I built Zeppelin using:
>>>
>>> mvn clean package -Pspark-1.6 -Dspark.version=1.6.0 -Dhadoop.version=2.6.0-cdh5.4.8 -Phadoop-2.6 -Pyarn -Ppyspark -Pvendor-repo -DskipTests
>>>
>>> This worked for me.
>>>
>>> Cheers,
>>> Ben
>>>
>>>
>>>> On Feb 1, 2016, at 7:44 PM, Felix Cheung <felixcheung_m@hotmail.com> wrote:
>>>>
>>>> Hi
>>>>
>>>> You can see an example build command for the Spark 1.6 profile here:
>>>>
>>>> https://github.com/apache/incubator-zeppelin/blob/master/README.md
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Mon, Feb 1, 2016 at 3:59 PM -0800, "Daniel Valdivia" <hola@danielvaldivia.com> wrote:
>>>>
>>>> Hi,
>>>>
>>>> I'd like to ask if there's an easy way to upgrade Spark to 1.6.0 from the 1.4.x that's bundled with the current release of Zeppelin. Would updating the pom.xml and compiling suffice?
>>>>
>>>> Cheers
>>>
>>
>


Re: Upgrade spark to 1.6.0

Posted by Daniel Valdivia <ho...@danielvaldivia.com>.
I don't need any specific version of Hadoop; I actually removed it from the build command and still get the error. I just need Spark 1.6.
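
For reference, the command I'm building with now is roughly Ben's command with the CDH-specific flags dropped:

mvn clean package -Pspark-1.6 -Dspark.version=1.6.0 -Phadoop-2.6 -Pyarn -Ppyspark -DskipTests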


> On Feb 3, 2016, at 9:05 AM, Felix Cheung <fe...@hotmail.com> wrote:
> 
> I think his build command only works with Cloudera CDH 5.4.8, as you can see. Mismatched Akka versions are very common when the Hadoop distribution is different. Which version of Spark and which Hadoop distribution are you running?
> 
> 
> 
> 
> 
> On Tue, Feb 2, 2016 at 1:36 PM -0800, "Daniel Valdivia" <hola@danielvaldivia.com> wrote:
> 
> Hello,
> 
> An update on the matter: using the build command
>
> mvn clean package -Pspark-1.6 -Dspark.version=1.6.0 -Dhadoop.version=2.6.0-cdh5.4.8 -Phadoop-2.6 -Pyarn -Ppyspark -Pvendor-repo -DskipTests
>
> I end up getting the following error stack trace upon executing a new JSON:
>
> akka.ConfigurationException: Akka JAR version [2.2.3] does not match the provided config version [2.3.11]
>     at akka.actor.ActorSystem$Settings.<init>(ActorSystem.scala:181)
>     at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:470)
>     at akka.actor.ActorSystem$.apply(ActorSystem.scala:111)
>     at akka.actor.ActorSystem$.apply(ActorSystem.scala:104)
>     at org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$$doCreateActorSystem(AkkaUtils.scala:121)
>     at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:53)
>     at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:52)
>     at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1964)
>     at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
>     at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1955)
>     at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:55)
>     at org.apache.spark.SparkEnv$.create(SparkEnv.scala:266)
>     at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193)
>     at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:288)
>     at org.apache.spark.SparkContext.<init>(SparkContext.scala:457)
>     at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:339)
>     at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:145)
>     at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:465)
>     at org.apache.zeppelin.interpreter.ClassloaderInterpreter.open(ClassloaderInterpreter.java:74)
>     at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:68)
>     at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:92)
>     at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:300)
>     at org.apache.zeppelin.scheduler.Job.run(Job.java:169)
>     at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:134)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
> 
> There's some mention of this problem on Stack Overflow, but it seems it was fixed:
>
> http://stackoverflow.com/questions/32294276/how-to-connect-zeppelin-to-spark-1-5-built-from-the-sources
>
> Any idea how to deal with this Akka library problem?
> 
>> On Feb 2, 2016, at 12:02 PM, Daniel Valdivia <hola@danielvaldivia.com> wrote:
>> 
>> Hi,
>> 
>> Thanks for the suggestion; I'm running Maven with Ben's command.
>> 
>> Cheers!
>> 
>>> On Feb 1, 2016, at 7:47 PM, Benjamin Kim <bbuild11@gmail.com> wrote:
>>> 
>>> Hi Felix,
>>> 
>>> After installing Spark 1.6, I built Zeppelin using:
>>> 
>>> mvn clean package -Pspark-1.6 -Dspark.version=1.6.0 -Dhadoop.version=2.6.0-cdh5.4.8 -Phadoop-2.6 -Pyarn -Ppyspark -Pvendor-repo -DskipTests
>>> 
>>> This worked for me.
>>> 
>>> Cheers,
>>> Ben
>>> 
>>> 
>>>> On Feb 1, 2016, at 7:44 PM, Felix Cheung <felixcheung_m@hotmail.com> wrote:
>>>> 
>>>> Hi
>>>> 
>>>> You can see an example build command for the Spark 1.6 profile here:
>>>>
>>>> https://github.com/apache/incubator-zeppelin/blob/master/README.md
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On Mon, Feb 1, 2016 at 3:59 PM -0800, "Daniel Valdivia" <hola@danielvaldivia.com> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> I'd like to ask if there's an easy way to upgrade Spark to 1.6.0 from the 1.4.x that's bundled with the current release of Zeppelin. Would updating the pom.xml and compiling suffice?
>>>> 
>>>> Cheers
>>> 
>> 
> 


Re: Upgrade spark to 1.6.0

Posted by Felix Cheung <fe...@hotmail.com>.
I think his build command only works with Cloudera CDH 5.4.8, as you can see. Mismatched Akka versions are very common when the Hadoop distribution is different. Which version of Spark and which Hadoop distribution are you running?
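
One way to narrow it down is to check which Akka version the Zeppelin build actually resolved. A quick diagnostic sketch, run from the Zeppelin source root (append the same profile flags you built with):

# List the com.typesafe.akka artifacts Maven resolves for this build
mvn dependency:tree -Dincludes=com.typesafe.akka

If the version printed there differs from the Akka bundled in your Spark/Hadoop distribution, that mismatch produces exactly this kind of error.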






On Tue, Feb 2, 2016 at 1:36 PM -0800, "Daniel Valdivia" <ho...@danielvaldivia.com> wrote:





Hello,

An update on the matter: using the build command

mvn clean package -Pspark-1.6 -Dspark.version=1.6.0 -Dhadoop.version=2.6.0-cdh5.4.8 -Phadoop-2.6 -Pyarn -Ppyspark -Pvendor-repo -DskipTests

I end up getting the following error stack trace upon executing a new JSON:

akka.ConfigurationException: Akka JAR version [2.2.3] does not match the provided config version [2.3.11]
    at akka.actor.ActorSystem$Settings.<init>(ActorSystem.scala:181)
    at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:470)
    at akka.actor.ActorSystem$.apply(ActorSystem.scala:111)
    at akka.actor.ActorSystem$.apply(ActorSystem.scala:104)
    at org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$$doCreateActorSystem(AkkaUtils.scala:121)
    at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:53)
    at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:52)
    at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1964)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
    at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1955)
    at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:55)
    at org.apache.spark.SparkEnv$.create(SparkEnv.scala:266)
    at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193)
    at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:288)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:457)
    at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:339)
    at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:145)
    at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:465)
    at org.apache.zeppelin.interpreter.ClassloaderInterpreter.open(ClassloaderInterpreter.java:74)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:68)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:92)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:300)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:169)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:134)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

There's some mention of this problem on Stack Overflow, but it seems it was fixed:

http://stackoverflow.com/questions/32294276/how-to-connect-zeppelin-to-spark-1-5-built-from-the-sources

Any idea how to deal with this Akka library problem?

> On Feb 2, 2016, at 12:02 PM, Daniel Valdivia <ho...@danielvaldivia.com> wrote:
>
> Hi,
>
> Thanks for the suggestion; I'm running Maven with Ben's command.
>
> Cheers!
>
>> On Feb 1, 2016, at 7:47 PM, Benjamin Kim <bbuild11@gmail.com> wrote:
>>
>> Hi Felix,
>>
>> After installing Spark 1.6, I built Zeppelin using:
>>
>> mvn clean package -Pspark-1.6 -Dspark.version=1.6.0 -Dhadoop.version=2.6.0-cdh5.4.8 -Phadoop-2.6 -Pyarn -Ppyspark -Pvendor-repo -DskipTests
>>
>> This worked for me.
>>
>> Cheers,
>> Ben
>>
>>
>>> On Feb 1, 2016, at 7:44 PM, Felix Cheung <felixcheung_m@hotmail.com> wrote:
>>>
>>> Hi
>>>
>>> You can see an example build command for the Spark 1.6 profile here:
>>>
>>> https://github.com/apache/incubator-zeppelin/blob/master/README.md
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Feb 1, 2016 at 3:59 PM -0800, "Daniel Valdivia" <hola@danielvaldivia.com> wrote:
>>>
>>> Hi,
>>>
>>> I'd like to ask if there's an easy way to upgrade Spark to 1.6.0 from the 1.4.x that's bundled with the current release of Zeppelin. Would updating the pom.xml and compiling suffice?
>>>
>>> Cheers
>>
>


Re: Upgrade spark to 1.6.0

Posted by Daniel Valdivia <ho...@danielvaldivia.com>.
Hello,

An update on the matter: using the build command

mvn clean package -Pspark-1.6 -Dspark.version=1.6.0 -Dhadoop.version=2.6.0-cdh5.4.8 -Phadoop-2.6 -Pyarn -Ppyspark -Pvendor-repo -DskipTests

I end up getting the following error stack trace upon executing a new JSON:

akka.ConfigurationException: Akka JAR version [2.2.3] does not match the provided config version [2.3.11]
    at akka.actor.ActorSystem$Settings.<init>(ActorSystem.scala:181)
    at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:470)
    at akka.actor.ActorSystem$.apply(ActorSystem.scala:111)
    at akka.actor.ActorSystem$.apply(ActorSystem.scala:104)
    at org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$$doCreateActorSystem(AkkaUtils.scala:121)
    at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:53)
    at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:52)
    at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1964)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
    at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1955)
    at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:55)
    at org.apache.spark.SparkEnv$.create(SparkEnv.scala:266)
    at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193)
    at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:288)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:457)
    at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:339)
    at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:145)
    at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:465)
    at org.apache.zeppelin.interpreter.ClassloaderInterpreter.open(ClassloaderInterpreter.java:74)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:68)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:92)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:300)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:169)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:134)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

There's some mention of this problem on Stack Overflow, but it seems it was fixed:

http://stackoverflow.com/questions/32294276/how-to-connect-zeppelin-to-spark-1-5-built-from-the-sources

Any idea how to deal with this Akka library problem?

> On Feb 2, 2016, at 12:02 PM, Daniel Valdivia <ho...@danielvaldivia.com> wrote:
> 
> Hi,
> 
> Thanks for the suggestion; I'm running Maven with Ben's command.
> 
> Cheers!
> 
>> On Feb 1, 2016, at 7:47 PM, Benjamin Kim <bbuild11@gmail.com> wrote:
>> 
>> Hi Felix,
>> 
>> After installing Spark 1.6, I built Zeppelin using:
>> 
>> mvn clean package -Pspark-1.6 -Dspark.version=1.6.0 -Dhadoop.version=2.6.0-cdh5.4.8 -Phadoop-2.6 -Pyarn -Ppyspark -Pvendor-repo -DskipTests
>> 
>> This worked for me.
>> 
>> Cheers,
>> Ben
>> 
>> 
>>> On Feb 1, 2016, at 7:44 PM, Felix Cheung <felixcheung_m@hotmail.com> wrote:
>>> 
>>> Hi
>>> 
>>> You can see an example build command for the Spark 1.6 profile here:
>>>
>>> https://github.com/apache/incubator-zeppelin/blob/master/README.md
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On Mon, Feb 1, 2016 at 3:59 PM -0800, "Daniel Valdivia" <hola@danielvaldivia.com> wrote:
>>> 
>>> Hi,
>>> 
>>> I'd like to ask if there's an easy way to upgrade Spark to 1.6.0 from the 1.4.x that's bundled with the current release of Zeppelin. Would updating the pom.xml and compiling suffice?
>>> 
>>> Cheers
>> 
> 


Re: Upgrade spark to 1.6.0

Posted by Daniel Valdivia <ho...@danielvaldivia.com>.
Hi,

Thanks for the suggestion; I'm running Maven with Ben's command.

Cheers!

> On Feb 1, 2016, at 7:47 PM, Benjamin Kim <bb...@gmail.com> wrote:
> 
> Hi Felix,
> 
> After installing Spark 1.6, I built Zeppelin using:
> 
> mvn clean package -Pspark-1.6 -Dspark.version=1.6.0 -Dhadoop.version=2.6.0-cdh5.4.8 -Phadoop-2.6 -Pyarn -Ppyspark -Pvendor-repo -DskipTests
> 
> This worked for me.
> 
> Cheers,
> Ben
> 
> 
>> On Feb 1, 2016, at 7:44 PM, Felix Cheung <felixcheung_m@hotmail.com> wrote:
>> 
>> Hi
>> 
>> You can see an example build command for the Spark 1.6 profile here:
>>
>> https://github.com/apache/incubator-zeppelin/blob/master/README.md
>> 
>> 
>> 
>> 
>> 
>> On Mon, Feb 1, 2016 at 3:59 PM -0800, "Daniel Valdivia" <hola@danielvaldivia.com> wrote:
>> 
>> Hi,
>> 
>> I'd like to ask if there's an easy way to upgrade Spark to 1.6.0 from the 1.4.x that's bundled with the current release of Zeppelin. Would updating the pom.xml and compiling suffice?
>> 
>> Cheers
> 


Re: Upgrade spark to 1.6.0

Posted by Benjamin Kim <bb...@gmail.com>.
Hi Felix,

After installing Spark 1.6, I built Zeppelin using:

mvn clean package -Pspark-1.6 -Dspark.version=1.6.0 -Dhadoop.version=2.6.0-cdh5.4.8 -Phadoop-2.6 -Pyarn -Ppyspark -Pvendor-repo -DskipTests

This worked for me.

Cheers,
Ben


> On Feb 1, 2016, at 7:44 PM, Felix Cheung <fe...@hotmail.com> wrote:
> 
> Hi
> 
> You can see an example build command for the Spark 1.6 profile here:
> 
> https://github.com/apache/incubator-zeppelin/blob/master/README.md
> 
> 
> 
> 
> 
> On Mon, Feb 1, 2016 at 3:59 PM -0800, "Daniel Valdivia" <hola@danielvaldivia.com> wrote:
> 
> Hi,
> 
> I'd like to ask if there's an easy way to upgrade Spark to 1.6.0 from the 1.4.x that's bundled with the current release of Zeppelin. Would updating the pom.xml and compiling suffice?
> 
> Cheers


Re: Upgrade spark to 1.6.0

Posted by Felix Cheung <fe...@hotmail.com>.
Hi
You can see an example build command for the Spark 1.6 profile here:
https://github.com/apache/incubator-zeppelin/blob/master/README.md
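
For example, a generic (non-vendor) build against the Spark 1.6 profile would look something like this:

mvn clean package -Pspark-1.6 -Dspark.version=1.6.0 -Phadoop-2.6 -Pyarn -Ppyspark -DskipTests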





On Mon, Feb 1, 2016 at 3:59 PM -0800, "Daniel Valdivia" <ho...@danielvaldivia.com> wrote:





Hi,

I'd like to ask if there's an easy way to upgrade Spark to 1.6.0 from the 1.4.x that's bundled with the current release of Zeppelin. Would updating the pom.xml and compiling suffice?

Cheers