Posted to user@spark.apache.org by Chanh Le <gi...@gmail.com> on 2016/07/11 08:09:52 UTC

Zeppelin Spark with Dynamic Allocation

Hi everybody,
I am testing Zeppelin with dynamic allocation, but it seems it’s not working.

[attachment: Screen Shot 2016-07-11 at 3.07.37 PM.png]

From the logs I received, I saw that the Spark context was created successfully and the task was running, but right after that it was terminated.
Any ideas on that?
Thanks.



 INFO [2016-07-11 15:03:40,096] ({Thread-0} RemoteInterpreterServer.java[run]:81) - Starting remote interpreter server on port 24994
 INFO [2016-07-11 15:03:40,471] ({pool-1-thread-2} RemoteInterpreterServer.java[createInterpreter]:169) - Instantiate interpreter org.apache.zeppelin.spark.SparkInterpreter
 INFO [2016-07-11 15:03:40,521] ({pool-1-thread-2} RemoteInterpreterServer.java[createInterpreter]:169) - Instantiate interpreter org.apache.zeppelin.spark.PySparkInterpreter
 INFO [2016-07-11 15:03:40,526] ({pool-1-thread-2} RemoteInterpreterServer.java[createInterpreter]:169) - Instantiate interpreter org.apache.zeppelin.spark.SparkRInterpreter
 INFO [2016-07-11 15:03:40,528] ({pool-1-thread-2} RemoteInterpreterServer.java[createInterpreter]:169) - Instantiate interpreter org.apache.zeppelin.spark.SparkSqlInterpreter
 INFO [2016-07-11 15:03:40,531] ({pool-1-thread-2} RemoteInterpreterServer.java[createInterpreter]:169) - Instantiate interpreter org.apache.zeppelin.spark.DepInterpreter
 INFO [2016-07-11 15:03:40,563] ({pool-2-thread-5} SchedulerFactory.java[jobStarted]:131) - Job remoteInterpretJob_1468224220562 started by scheduler org.apache.zeppelin.spark.SparkInterpreter998491254
 WARN [2016-07-11 15:03:41,559] ({pool-2-thread-5} NativeCodeLoader.java[<clinit>]:62) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
 INFO [2016-07-11 15:03:41,703] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Changing view acls to: root
 INFO [2016-07-11 15:03:41,704] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Changing modify acls to: root
 INFO [2016-07-11 15:03:41,708] ({pool-2-thread-5} Logging.scala[logInfo]:58) - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
 INFO [2016-07-11 15:03:41,977] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Starting HTTP Server
 INFO [2016-07-11 15:03:42,029] ({pool-2-thread-5} Server.java[doStart]:272) - jetty-8.y.z-SNAPSHOT
 INFO [2016-07-11 15:03:42,047] ({pool-2-thread-5} AbstractConnector.java[doStart]:338) - Started SocketConnector@0.0.0.0:53313
 INFO [2016-07-11 15:03:42,048] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Successfully started service 'HTTP class server' on port 53313.
 INFO [2016-07-11 15:03:43,978] ({pool-2-thread-5} SparkInterpreter.java[createSparkContext]:233) - ------ Create new SparkContext mesos://zk://master1:2181,master2:2181,master3:2181/mesos -------
 INFO [2016-07-11 15:03:44,003] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Running Spark version 1.6.1
 INFO [2016-07-11 15:03:44,036] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Changing view acls to: root
 INFO [2016-07-11 15:03:44,036] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Changing modify acls to: root
 INFO [2016-07-11 15:03:44,037] ({pool-2-thread-5} Logging.scala[logInfo]:58) - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
 INFO [2016-07-11 15:03:44,231] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Successfully started service 'sparkDriver' on port 33913.
 INFO [2016-07-11 15:03:44,552] ({sparkDriverActorSystem-akka.actor.default-dispatcher-4} Slf4jLogger.scala[applyOrElse]:80) - Slf4jLogger started
 INFO [2016-07-11 15:03:44,597] ({sparkDriverActorSystem-akka.actor.default-dispatcher-4} Slf4jLogger.scala[apply$mcV$sp]:74) - Starting remoting
 INFO [2016-07-11 15:03:44,754] ({sparkDriverActorSystem-akka.actor.default-dispatcher-4} Slf4jLogger.scala[apply$mcV$sp]:74) - Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.197.0.3:55213]
 INFO [2016-07-11 15:03:44,760] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Successfully started service 'sparkDriverActorSystem' on port 55213.
 INFO [2016-07-11 15:03:44,771] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Registering MapOutputTracker
 INFO [2016-07-11 15:03:44,789] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Registering BlockManagerMaster
 INFO [2016-07-11 15:03:44,802] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Created local directory at /data/tmp/blockmgr-14f6a013-abb7-4b46-adab-07282b03c7e4
 INFO [2016-07-11 15:03:44,808] ({pool-2-thread-5} Logging.scala[logInfo]:58) - MemoryStore started with capacity 511.1 MB
 INFO [2016-07-11 15:03:44,871] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Registering OutputCommitCoordinator
 INFO [2016-07-11 15:03:45,018] ({pool-2-thread-5} Server.java[doStart]:272) - jetty-8.y.z-SNAPSHOT
 INFO [2016-07-11 15:03:45,031] ({pool-2-thread-5} AbstractConnector.java[doStart]:338) - Started SelectChannelConnector@0.0.0.0:4040
 INFO [2016-07-11 15:03:45,033] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Successfully started service 'SparkUI' on port 4040.
 INFO [2016-07-11 15:03:45,035] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Started SparkUI at http://10.197.0.3:4040
 INFO [2016-07-11 15:03:45,059] ({pool-2-thread-5} Logging.scala[logInfo]:58) - HTTP File server directory is /data/tmp/spark-4467170d-a139-4fd4-8628-90820e349760/httpd-b7104d70-f7fb-400c-b1e6-49ec36b7c99e
 INFO [2016-07-11 15:03:45,059] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Starting HTTP Server
 INFO [2016-07-11 15:03:45,060] ({pool-2-thread-5} Server.java[doStart]:272) - jetty-8.y.z-SNAPSHOT
 INFO [2016-07-11 15:03:45,063] ({pool-2-thread-5} AbstractConnector.java[doStart]:338) - Started SocketConnector@0.0.0.0:21210
 INFO [2016-07-11 15:03:45,064] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Successfully started service 'HTTP file server' on port 21210.
 INFO [2016-07-11 15:03:45,099] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Added JAR file:/home/spark/log_analyzer/alluxio-core-client-spark-1.1.0-jar-with-dependencies.jar at http://10.197.0.3:21210/jars/alluxio-core-client-spark-1.1.0-jar-with-dependencies.jar with timestamp 1468224225098
 INFO [2016-07-11 15:03:45,116] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Added JAR file:/home/spark/zeppelin-current/interpreter/spark/zeppelin-spark-0.6.0.jar at http://10.197.0.3:21210/jars/zeppelin-spark-0.6.0.jar with timestamp 1468224225116
 INFO [2016-07-11 15:03:45,206] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Created default pool default, schedulingMode: FIFO, minShare: 0, weight: 1
 INFO [2016-07-11 15:03:45,286] ({Thread-37} Logging.scala[logInfo]:58) - Registered as framework ID 90694c50-1759-455b-9034-77a85e3bcab7-0048
 INFO [2016-07-11 15:03:45,294] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 37086.
 INFO [2016-07-11 15:03:45,294] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Server created on 37086
 INFO [2016-07-11 15:03:45,297] ({pool-2-thread-5} Logging.scala[logInfo]:58) - external shuffle service port = 7337
 INFO [2016-07-11 15:03:45,298] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Trying to register BlockManager
 INFO [2016-07-11 15:03:45,301] ({dispatcher-event-loop-2} Logging.scala[logInfo]:58) - Registering block manager 10.197.0.3:37086 with 511.1 MB RAM, BlockManagerId(driver, 10.197.0.3, 37086)
 INFO [2016-07-11 15:03:45,303] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Registered BlockManager
 INFO [2016-07-11 15:03:45,414] ({Thread-39} Logging.scala[logInfo]:58) - Mesos task 3 is now TASK_RUNNING
 INFO [2016-07-11 15:03:45,465] ({CoarseMesosSchedulerBackend-mesos-driver} Logging.scala[logInfo]:58) - driver.run() returned with code DRIVER_ABORTED
 INFO [2016-07-11 15:03:45,469] ({Thread-2} Logging.scala[logInfo]:58) - Shutdown hook called
 INFO [2016-07-11 15:03:45,477] ({Thread-2} Logging.scala[logInfo]:58) - Shutdown hook called
 INFO [2016-07-11 15:03:45,478] ({Thread-2} Logging.scala[logInfo]:58) - Deleting directory /tmp/spark-e23ac5cc-25b0-4a98-ab37-54073ae58a7b
 INFO [2016-07-11 15:03:45,479] ({Thread-2} Logging.scala[logInfo]:58) - Deleting directory /data/tmp/spark-4467170d-a139-4fd4-8628-90820e349760
 INFO [2016-07-11 15:03:45,486] ({Thread-2} Logging.scala[logInfo]:58) - Deleting directory /data/tmp/spark-4467170d-a139-4fd4-8628-90820e349760/httpd-b7104d70-f7fb-400c-b1e6-49ec36b7c99e
 INFO [2016-07-11 15:03:45,486] ({Thread-2} Logging.scala[logInfo]:58) - Deleting directory /data/tmp/spark-4467170d-a139-4fd4-8628-90820e349760/userFiles-88d77298-a5c8-4d7f-953c-bfd5b973fb9f
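
The failure point in the log above is the Mesos driver: the task reaches TASK_RUNNING, then driver.run() immediately returns DRIVER_ABORTED and the shutdown hooks fire. For reference, dynamic allocation requires an external shuffle service that every executor can reach, and on Mesos that service is not started for you. A rough checklist of the interpreter properties involved (standard Spark 1.6 property names; the executor counts are illustrative, not a recommendation):

  spark.dynamicAllocation.enabled        true
  spark.shuffle.service.enabled          true
  spark.shuffle.service.port             7337
  spark.dynamicAllocation.minExecutors   1
  spark.dynamicAllocation.maxExecutors   10

and, assuming a stock Spark 1.6 distribution, the shuffle service itself has to be launched separately on every Mesos agent:

  # run on each Mesos agent, from the Spark installation there
  $SPARK_HOME/sbin/start-mesos-shuffle-service.sh

The "external shuffle service port = 7337" line above suggests the client side is already configured, so whether the agents are actually running the shuffle service is the first thing worth checking.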

Re: Zeppelin Spark with Dynamic Allocation

Posted by Chanh Le <gi...@gmail.com>.
Hi Tamas,
I am using Spark 1.6.1.
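
For completeness: on Zeppelin 0.6 the same settings can also be handed to the Spark interpreter through conf/zeppelin-env.sh instead of the interpreter UI. A minimal sketch (the Spark path is illustrative):

  export SPARK_HOME=/opt/spark-1.6.1
  export SPARK_SUBMIT_OPTIONS="--conf spark.dynamicAllocation.enabled=true --conf spark.shuffle.service.enabled=true"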





> On Jul 11, 2016, at 3:24 PM, Tamas Szuromi <ta...@odigeo.com> wrote:
> 
> Hello,
> 
> What Spark version do you use? I have the same issue with Spark 1.6.1, and there is a ticket somewhere.
> 
> cheers,
> 
> 
> 
> 
> Tamas Szuromi
> Data Analyst
> Skype: tromika
> E-mail: tamas.szuromi@odigeo.com
> 
> ODIGEO Hungary Kft.
> 1066 Budapest
> Weiner Leó u. 16.
> www.liligo.com  <http://www.liligo.com/>
> check out our newest video  <http://www.youtube.com/user/liligo>
> 
> 
> 
> On 11 July 2016 at 10:09, Chanh Le <giaosudau@gmail.com> wrote:
> Hi everybody,
> I am testing Zeppelin with dynamic allocation, but it seems it’s not working.
> 
> <Screen Shot 2016-07-11 at 3.07.37 PM.png>
> 
> 
> 
> 
> 
> From the logs I received, I saw that the Spark context was created successfully and the task was running, but right after that it was terminated.
> Any ideas on that?
> Thanks.
> 
> [full interpreter log snipped; identical to the log in the original message above]
> 


Re: Zeppelin Spark with Dynamic Allocation

Posted by Tamas Szuromi <ta...@odigeo.com.INVALID>.
Hello,

What Spark version do you use? I have the same issue with Spark 1.6.1, and there is a ticket somewhere.

cheers,




Tamas Szuromi

Data Analyst

Skype: tromika
E-mail: tamas.szuromi@odigeo.com


ODIGEO Hungary Kft.
1066 Budapest
Weiner Leó u. 16.

www.liligo.com  <http://www.liligo.com/>
check out our newest video  <http://www.youtube.com/user/liligo>



On 11 July 2016 at 10:09, Chanh Le <gi...@gmail.com> wrote:

> Hi everybody,
> I am testing Zeppelin with dynamic allocation, but it seems it’s not working.
>
> [attachment: Screen Shot 2016-07-11 at 3.07.37 PM.png]
>
> From the logs I received, I saw that the Spark context was created successfully and the task
> was running, but right after that it was terminated.
> Any ideas on that?
> Thanks.
>
> [full interpreter log snipped; identical to the log in the original message above]
>