Posted to users@zeppelin.apache.org by Boris Schminke <sc...@gmail.com> on 2016/01/05 20:38:10 UTC

Spark 1.6.0?

Hi,
why can't I use Zeppelin with Spark 1.6.0?
I could probably do it with the developers' 0.6 version compiled from sources,
couldn't I?

Regards,
Boris
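[Editor's note: for readers wondering what "compiled from sources" involved at the time, a typical source build of the 0.6.0-SNAPSHOT used Maven with Spark/Hadoop build profiles. The commands below are a sketch; the exact profile names are assumptions based on the build conventions of that era, and a spark-1.6 profile would only exist once Spark 1.6 support actually landed in master.]

```shell
# Sketch: build Zeppelin 0.6.0-SNAPSHOT from source against a chosen
# Spark version. Profile names (-Pspark-1.6, -Phadoop-2.6) are assumptions.
git clone https://github.com/apache/incubator-zeppelin.git
cd incubator-zeppelin
mvn clean package -DskipTests -Pspark-1.6 -Phadoop-2.6
```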

RE: Spark 1.6.0?

Posted by Mu...@cognizant.com.
Hi Boris,

"Developer version" means it is still under development, i.e. in a beta stage. It doesn't necessarily mean you can compile it from sources to match whatever Spark version you want. I agree that Zeppelin's build varies according to the Spark version.

Thanks,
Snehit
________________________________
From: Boris Schminke [schminkeba@gmail.com]
Sent: 06 January 2016 01:08:10
To: users@zeppelin.incubator.apache.org
Subject: Spark 1.6.0?

Hi,
why can't I use Zeppelin with Spark 1.6.0?
Probably I could do it in developers' 0.6 version compiled from sources, couldn't I?

Regards,
Boris

Re: Spark 1.6.0?

Posted by "Kevin (Sangwoo) Kim" <ke...@apache.org>.
I also have some issues with Spark 1.6.0.

Stack trace attached below:

16/01/06 01:50:29 WARN cluster.SparkDeploySchedulerBackend: Application ID is not initialized yet.
16/01/06 01:50:29 ERROR cluster.SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
16/01/06 01:50:29 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 32902.
16/01/06 01:50:29 INFO netty.NettyBlockTransferService: Server created on 32902
16/01/06 01:50:29 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/01/06 01:50:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager 172.16.187.61:32902 with 1140.4 MB RAM, BlockManagerId(driver, 172.16.187.61, 32902)
16/01/06 01:50:29 INFO storage.BlockManagerMaster: Registered BlockManager
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/01/06 01:50:29 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/01/06 01:50:29 INFO ui.SparkUI: Stopped Spark web UI at http://172.16.187.61:4041
16/01/06 01:50:29 INFO cluster.SparkDeploySchedulerBackend: Shutting down all executors
16/01/06 01:50:29 INFO cluster.SparkDeploySchedulerBackend: Asking each executor to shut down
16/01/06 01:50:29 WARN client.AppClient$ClientEndpoint: Drop UnregisterApplication(null) because has not yet connected to master
16/01/06 01:50:29 ERROR spark.MapOutputTrackerMaster: Error communicating with MapOutputTracker
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1325)
    at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
    at org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:110)
    at org.apache.spark.MapOutputTracker.sendTracker(MapOutputTracker.scala:120)
    at org.apache.spark.MapOutputTrackerMaster.stop(MapOutputTracker.scala:462)
    at org.apache.spark.SparkEnv.stop(SparkEnv.scala:93)
    at org.apache.spark.SparkContext$$anonfun$stop$12.apply$mcV$sp(SparkContext.scala:1756)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1755)
    at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.dead(SparkDeploySchedulerBackend.scala:127)
    at org.apache.spark.deploy.client.AppClient$ClientEndpoint.markDead(AppClient.scala:264)
    at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:134)
    at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1163)
    at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:129)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
16/01/06 01:50:29 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/01/06 01:50:29 ERROR util.Utils: Uncaught exception in thread appclient-registration-retry-thread
org.apache.spark.SparkException: Error communicating with MapOutputTracker
    at org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:114)
    at org.apache.spark.MapOutputTracker.sendTracker(MapOutputTracker.scala:120)
    at org.apache.spark.MapOutputTrackerMaster.stop(MapOutputTracker.scala:462)
    at org.apache.spark.SparkEnv.stop(SparkEnv.scala:93)
    at org.apache.spark.SparkContext$$anonfun$stop$12.apply$mcV$sp(SparkContext.scala:1756)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1755)
    at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.dead(SparkDeploySchedulerBackend.scala:127)
    at org.apache.spark.deploy.client.AppClient$ClientEndpoint.markDead(AppClient.scala:264)
    at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:134)
    at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1163)
    at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:129)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1325)
    at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
    at org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:110)
    ... 18 more
16/01/06 01:50:29 INFO spark.SparkContext: Successfully stopped SparkContext

On Wed, Jan 6, 2016 at 4:53 AM, Amos B. Elberg <am...@me.com> wrote:

> Moon - I don’t believe that PR actually works.

Re: Spark 1.6.0?

Posted by "Amos B. Elberg" <am...@me.com>.
Moon - I don’t believe that PR actually works.  

From: moon soo Lee <mo...@apache.org>
Reply: users@zeppelin.incubator.apache.org <us...@zeppelin.incubator.apache.org>
Date: January 5, 2016 at 2:51:14 PM
To: users@zeppelin.incubator.apache.org <us...@zeppelin.incubator.apache.org>
Subject:  Re: Spark 1.6.0?  

Hi Boris,

There is a pull request that adds Spark 1.6.0 support: https://github.com/apache/incubator-zeppelin/pull/463. It's not merged yet; you may need to apply it manually until it gets merged into master.

Thanks,
moon

On Tue, Jan 5, 2016 at 11:38 AM Boris Schminke <sc...@gmail.com> wrote:
Hi,
why can't I use Zeppelin with Spark 1.6.0?
Probably I could do it in developers' 0.6 version compiled from sources, couldn't I?

Regards,
Boris

Re: Spark 1.6.0?

Posted by moon soo Lee <mo...@apache.org>.
Hi Boris,

There is a pull request that adds Spark 1.6.0 support:
https://github.com/apache/incubator-zeppelin/pull/463. It's not merged yet;
you may need to apply it manually until it gets merged into master.

Thanks,
moon

On Tue, Jan 5, 2016 at 11:38 AM Boris Schminke <sc...@gmail.com> wrote:

> Hi,
> why can't I use Zeppelin with Spark 1.6.0?
> Probably I could do it in developers' 0.6 version compiled from sources,
> couldn't I?
>
> Regards,
> Boris
>
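[Editor's note: for readers unsure what "apply manually" means here, one common way to try an unmerged GitHub pull request is to fetch it via GitHub's read-only pull-request refs and build from the resulting branch. This is a sketch; the local branch name is arbitrary, and the Maven flags are assumptions about the project's build at the time.]

```shell
# Fetch the unmerged PR #463 into a local branch using GitHub's
# refs/pull/<id>/head convention, then build from that branch.
git clone https://github.com/apache/incubator-zeppelin.git
cd incubator-zeppelin
git fetch origin pull/463/head:pr-463
git checkout pr-463
mvn clean package -DskipTests
```

An alternative is downloading the PR's .patch file and applying it with git am, which preserves the contributor's commits.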