Posted to users@zeppelin.apache.org by Ruslan Dautkhanov <da...@gmail.com> on 2017/05/12 22:45:27 UTC

Restarting spark interpreter while it's running results in broken Zeppelin state

Restarting the spark interpreter while a spark paragraph is running leaves
Zeppelin in a broken state:
- the popup window showing that the spark interpreter is restarting never
closes (keeps spinning);
- refreshing the browser window shows [1] - all interpreters "disappear";
- attempting to run any spark paragraph (to lazily start the spark interpreter
again) throws the exception in [2].

That's a regression since the ~February snapshot of the Zeppelin master branch.

Very easy to reproduce - just restart the spark interpreter while one of its
paragraphs is running.
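
For reference, a minimal long-running paragraph to have in flight during the
restart could look like the sketch below (assuming the stock %spark
interpreter with the predefined sc; the actual job doesn't matter as long as
it keeps running):

    %spark
    // sketch only: keep the Spark interpreter busy so the restart happens
    // while this paragraph is still running
    sc.parallelize(1 to 200, 200).foreach { _ =>
      Thread.sleep(1000)  // each task sleeps ~1s, so the job stays alive for a while
    }

Then hit "restart" on the spark interpreter in the Interpreter page while
that paragraph is still running.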

Has anyone noticed that too?


Thanks,
Ruslan



[1]
[image: Inline image 1]

[2]

org.apache.spark.SparkException: Job 0 cancelled part of cancelled job group zeppelin-2C9A8J2AC-20170512-160255_260939781
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
    at org.apache.spark.scheduler.DAGScheduler.handleJobCancellation(DAGScheduler.scala:1375)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleJobGroupCancelled$1.apply$mcVI$sp(DAGScheduler.scala:788)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleJobGroupCancelled$1.apply(DAGScheduler.scala:788)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleJobGroupCancelled$1.apply(DAGScheduler.scala:788)
    at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
    at org.apache.spark.scheduler.DAGScheduler.handleJobGroupCancelled(DAGScheduler.scala:788)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1625)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:333)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
    at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2371)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
    at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2765)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2370)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2377)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2113)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2112)
    at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2795)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2112)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2327)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.zeppelin.spark.SparkZeppelinContext.showData(SparkZeppelinContext.java:111)
    at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:134)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:101)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:500)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:181)
    at org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
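
For what it's worth, the exception in [2] looks like the normal cancellation
path: Zeppelin runs each paragraph under a Spark job group (the zeppelin-...
id in the first line of the trace) and cancels that group when the paragraph
or interpreter is cancelled/restarted. A rough spark-shell sketch of how the
same error is produced (the group id below is just a placeholder, not
anything Zeppelin generated):

    import scala.concurrent.Future
    import scala.concurrent.ExecutionContext.Implicits.global

    val groupId = "demo-job-group"  // placeholder; Zeppelin uses ids like zeppelin-...-<paragraphId>

    // job groups are tracked per submitting thread, so set the group inside the Future
    val job = Future {
      sc.setJobGroup(groupId, "long-running paragraph")
      sc.parallelize(1 to 100, 100).foreach(_ => Thread.sleep(2000))
    }

    Thread.sleep(5000)           // let a few tasks start
    sc.cancelJobGroup(groupId)   // the job then fails with
                                 // "Job ... cancelled part of cancelled job group demo-job-group"

So the exception itself is expected once the group is cancelled; the problem
is that Zeppelin never recovers afterwards.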


