Posted to user@spark.apache.org by Paul Mogren <PM...@commercehub.com> on 2014/04/11 22:14:55 UTC

Shutdown with streaming driver running in cluster broke master web UI permanently

I had a cluster running with a streaming driver deployed into it. I shut down the cluster using sbin/stop-all.sh. Upon restarting (and restarting, and restarting), the master web UI cannot respond to requests. The cluster seems to be otherwise functional. Below is the master's log, showing stack traces.


pmogren@streamproc01:~/streamproc/spark-0.9.1-bin-hadoop2$ cat /home/pmogren/streamproc/spark-0.9.1-bin-hadoop2/sbin/../logs/spark-pmogren-org.apache.spark.deploy.master.Master-1-streamproc01.out
Spark Command: /usr/lib/jvm/java-8-oracle-amd64/bin/java -cp :/home/pmogren/streamproc/spark-0.9.1-bin-hadoop2/conf:/home/pmogren/streamproc/spark-0.9.1-bin-hadoop2/assembly/target/scala-2.10/spark-assembly_2.10-0.9.1-hadoop2.2.0.jar -Dspark.akka.logLifecycleEvents=true -Djava.library.path= -Xms512m -Xmx512m -Dspark.streaming.unpersist=true -Djava.net.preferIPv4Stack=true -Dsun.io.serialization.extendedDebugInfo=true -Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=pubsub01:2181 org.apache.spark.deploy.master.Master --ip 10.10.41.19 --port 7077 --webui-port 8080
========================================

log4j:WARN No appenders could be found for logger (akka.event.slf4j.Slf4jLogger).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
14/04/11 16:07:55 INFO Master: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/04/11 16:07:55 INFO Master: Starting Spark master at spark://10.10.41.19:7077
14/04/11 16:07:55 INFO MasterWebUI: Started Master web UI at http://10.10.41.19:8080
14/04/11 16:07:55 INFO Master: Persisting recovery state to ZooKeeper
14/04/11 16:07:55 INFO ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
14/04/11 16:07:55 INFO ZooKeeper: Client environment:host.name=streamproc01.nexus.commercehub.com
14/04/11 16:07:55 INFO ZooKeeper: Client environment:java.version=1.8.0
14/04/11 16:07:55 INFO ZooKeeper: Client environment:java.vendor=Oracle Corporation
14/04/11 16:07:55 INFO ZooKeeper: Client environment:java.home=/usr/lib/jvm/jdk1.8.0/jre
14/04/11 16:07:55 INFO ZooKeeper: Client environment:java.class.path=:/home/pmogren/streamproc/spark-0.9.1-bin-hadoop2/conf:/home/pmogren/streamproc/spark-0.9.1-bin-hadoop2/assembly/target/scala-2.10/spark-assembly_2.10-0.9.1-hadoop2.2.0.jar
14/04/11 16:07:55 INFO ZooKeeper: Client environment:java.library.path=
14/04/11 16:07:55 INFO ZooKeeper: Client environment:java.io.tmpdir=/tmp
14/04/11 16:07:55 INFO ZooKeeper: Client environment:java.compiler=<NA>
14/04/11 16:07:55 INFO ZooKeeper: Client environment:os.name=Linux
14/04/11 16:07:55 INFO ZooKeeper: Client environment:os.arch=amd64
14/04/11 16:07:55 INFO ZooKeeper: Client environment:os.version=3.5.0-23-generic
14/04/11 16:07:55 INFO ZooKeeper: Client environment:user.name=pmogren
14/04/11 16:07:55 INFO ZooKeeper: Client environment:user.home=/home/pmogren
14/04/11 16:07:55 INFO ZooKeeper: Client environment:user.dir=/home/pmogren/streamproc/spark-0.9.1-bin-hadoop2
14/04/11 16:07:55 INFO ZooKeeper: Initiating client connection, connectString=pubsub01:2181 sessionTimeout=30000 watcher=org.apache.spark.deploy.master.SparkZooKeeperSession$ZooKeeperWatcher@744bfbb6
14/04/11 16:07:55 INFO ZooKeeperLeaderElectionAgent: Starting ZooKeeper LeaderElection agent
14/04/11 16:07:55 INFO ZooKeeper: Initiating client connection, connectString=pubsub01:2181 sessionTimeout=30000 watcher=org.apache.spark.deploy.master.SparkZooKeeperSession$ZooKeeperWatcher@7f7e6043
14/04/11 16:07:55 INFO ClientCnxn: Opening socket connection to server pubsub01.nexus.commercehub.com/10.10.40.39:2181. Will not attempt to authenticate using SASL (unknown error)
14/04/11 16:07:55 INFO ClientCnxn: Socket connection established to pubsub01.nexus.commercehub.com/10.10.40.39:2181, initiating session
14/04/11 16:07:55 INFO ClientCnxn: Opening socket connection to server pubsub01.nexus.commercehub.com/10.10.40.39:2181. Will not attempt to authenticate using SASL (unknown error)
14/04/11 16:07:55 WARN ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
14/04/11 16:07:55 INFO ClientCnxn: Session establishment complete on server pubsub01.nexus.commercehub.com/10.10.40.39:2181, sessionid = 0x14515d9a11300ce, negotiated timeout = 30000
14/04/11 16:07:55 INFO ClientCnxn: Socket connection established to pubsub01.nexus.commercehub.com/10.10.40.39:2181, initiating session
14/04/11 16:07:55 WARN ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
14/04/11 16:07:55 INFO ClientCnxn: Session establishment complete on server pubsub01.nexus.commercehub.com/10.10.40.39:2181, sessionid = 0x14515d9a11300cf, negotiated timeout = 30000
14/04/11 16:07:55 WARN ZooKeeperLeaderElectionAgent: Cleaning up old ZK master election file that points to this master.
14/04/11 16:07:55 INFO ZooKeeperLeaderElectionAgent: Leader file disappeared, a master is down!
14/04/11 16:07:55 INFO Master: I have been elected leader! New state: RECOVERING
pmogren@streamproc01:~/streamproc/spark-0.9.1-bin-hadoop2$ tail -f /home/pmogren/streamproc/spark-0.9.1-bin-hadoop2/sbin/../logs/spark-pmogren-org.apache.spark.deploy.master.Master-1-streamproc01.out
14/04/11 16:07:55 INFO ClientCnxn: Socket connection established to pubsub01.nexus.commercehub.com/10.10.40.39:2181, initiating session
14/04/11 16:07:55 INFO ClientCnxn: Opening socket connection to server pubsub01.nexus.commercehub.com/10.10.40.39:2181. Will not attempt to authenticate using SASL (unknown error)
14/04/11 16:07:55 WARN ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
14/04/11 16:07:55 INFO ClientCnxn: Session establishment complete on server pubsub01.nexus.commercehub.com/10.10.40.39:2181, sessionid = 0x14515d9a11300ce, negotiated timeout = 30000
14/04/11 16:07:55 INFO ClientCnxn: Socket connection established to pubsub01.nexus.commercehub.com/10.10.40.39:2181, initiating session
14/04/11 16:07:55 WARN ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
14/04/11 16:07:55 INFO ClientCnxn: Session establishment complete on server pubsub01.nexus.commercehub.com/10.10.40.39:2181, sessionid = 0x14515d9a11300cf, negotiated timeout = 30000
14/04/11 16:07:55 WARN ZooKeeperLeaderElectionAgent: Cleaning up old ZK master election file that points to this master.
14/04/11 16:07:55 INFO ZooKeeperLeaderElectionAgent: Leader file disappeared, a master is down!
14/04/11 16:07:55 INFO Master: I have been elected leader! New state: RECOVERING
14/04/11 16:08:55 ERROR TaskInvocation:
java.lang.NullPointerException
        at org.apache.spark.deploy.master.Master$$anonfun$completeRecovery$5.apply(Master.scala:418)
        at org.apache.spark.deploy.master.Master$$anonfun$completeRecovery$5.apply(Master.scala:418)
        at scala.collection.TraversableLike$$anonfun$filter$1.apply(TraversableLike.scala:264)
        at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
        at scala.collection.TraversableLike$class.filter(TraversableLike.scala:263)
        at scala.collection.AbstractTraversable.filter(Traversable.scala:105)
        at org.apache.spark.deploy.master.Master.completeRecovery(Master.scala:418)
        at org.apache.spark.deploy.master.Master$$anonfun$receive$1$$anonfun$applyOrElse$1.apply$mcV$sp(Master.scala:160)
        at akka.actor.Scheduler$$anon$11.run(Scheduler.scala:118)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:42)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/04/11 16:09:18 WARN AbstractHttpConnection: /
java.lang.NullPointerException
        at org.apache.spark.deploy.master.ui.IndexPage.driverRow(IndexPage.scala:178)
        at org.apache.spark.deploy.master.ui.IndexPage$$anonfun$8.apply(IndexPage.scala:62)
        at org.apache.spark.deploy.master.ui.IndexPage$$anonfun$8.apply(IndexPage.scala:62)
        at org.apache.spark.ui.UIUtils$$anonfun$listingTable$2.apply(UIUtils.scala:134)
        at org.apache.spark.ui.UIUtils$$anonfun$listingTable$2.apply(UIUtils.scala:134)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.AbstractTraversable.map(Traversable.scala:105)
        at org.apache.spark.ui.UIUtils$.listingTable(UIUtils.scala:134)
        at org.apache.spark.deploy.master.ui.IndexPage.render(IndexPage.scala:62)
        at org.apache.spark.deploy.master.ui.MasterWebUI$$anonfun$4.apply(MasterWebUI.scala:67)
        at org.apache.spark.deploy.master.ui.MasterWebUI$$anonfun$4.apply(MasterWebUI.scala:67)
        at org.apache.spark.ui.JettyUtils$$anon$1.handle(JettyUtils.scala:61)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1040)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:976)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
        at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
        at org.eclipse.jetty.server.Server.handle(Server.java:363)
        at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:483)
        at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:920)
        at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:982)
        at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:635)
        at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
        at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
        at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
        at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
        at java.lang.Thread.run(Thread.java:744)
14/04/11 16:09:19 WARN AbstractHttpConnection: /favicon.ico
java.lang.NullPointerException
        at org.apache.spark.deploy.master.ui.IndexPage.driverRow(IndexPage.scala:178)
        at org.apache.spark.deploy.master.ui.IndexPage$$anonfun$8.apply(IndexPage.scala:62)
        at org.apache.spark.deploy.master.ui.IndexPage$$anonfun$8.apply(IndexPage.scala:62)
        at org.apache.spark.ui.UIUtils$$anonfun$listingTable$2.apply(UIUtils.scala:134)
        at org.apache.spark.ui.UIUtils$$anonfun$listingTable$2.apply(UIUtils.scala:134)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.AbstractTraversable.map(Traversable.scala:105)
        at org.apache.spark.ui.UIUtils$.listingTable(UIUtils.scala:134)
        at org.apache.spark.deploy.master.ui.IndexPage.render(IndexPage.scala:62)
        at org.apache.spark.deploy.master.ui.MasterWebUI$$anonfun$4.apply(MasterWebUI.scala:67)
        at org.apache.spark.deploy.master.ui.MasterWebUI$$anonfun$4.apply(MasterWebUI.scala:67)
        at org.apache.spark.ui.JettyUtils$$anon$1.handle(JettyUtils.scala:61)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1040)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:976)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
        at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
        at org.eclipse.jetty.server.Server.handle(Server.java:363)
        at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:483)
        at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:920)
        at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:982)
        at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:635)
        at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
        at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
        at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
        at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
        at java.lang.Thread.run(Thread.java:744)
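
Since the master runs with -Dspark.deploy.recoveryMode=ZOOKEEPER, it replays
whatever application and driver state was persisted to ZooKeeper every time it
comes back up, which would explain why each restart hits the same
NullPointerException. One workaround sketch, assuming the default recovery
path /spark (configurable via spark.deploy.zookeeper.dir), is to clear the
stale recovery state before restarting the master; note that this discards the
persisted application/driver records, so anything that was running would have
to be resubmitted:

    # from the ZooKeeper install, open a client against the ensemble used above
    zkCli.sh -server pubsub01:2181
    # inside the client: inspect, then remove, the stale recovery znodes
    # (default path /spark assumed, not verified against this deployment)
    ls /spark
    rmr /spark

    # then restart the cluster
    sbin/start-all.sh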

Re: Shutdown with streaming driver running in cluster broke master web UI permanently

Posted by scar scar <sc...@gmail.com>.
Thank you Tathagata,

It is great to know about this issue, but our problem is a little bit
different. We have 3 nodes in our Spark cluster, and when the ZooKeeper
leader dies, the Spark master shuts down and stays down, but a new master
gets elected and serves the UI. I think if the problem were event logging,
the new master would have failed as well. Or maybe I am wrong.
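
For reference, ZooKeeper-based master HA in standalone mode is normally wired
up through SPARK_DAEMON_JAVA_OPTS in conf/spark-env.sh on each master, along
these lines (hostnames here are placeholders, not the actual nodes):

    SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
      -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181,zk3:2181 \
      -Dspark.deploy.zookeeper.dir=/spark"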

On Tue, Jun 23, 2015 at 3:00 AM, Tathagata Das <td...@databricks.com> wrote:

> Maybe this is a known issue with Spark Streaming and the master web UI.
> Disable event logging, and it should be fine.
>
> https://issues.apache.org/jira/browse/SPARK-6270
>
> On Mon, Jun 22, 2015 at 8:54 AM, scar scar <sc...@gmail.com> wrote:
>
>> Sorry I was on vacation for a few days. Yes, it is on. This is what I
>> have in the logs:
>>
>> 15/06/22 10:44:00 INFO ClientCnxn: Unable to read additional data from
>> server sessionid 0x14dd82e22f70ef1, likely server has closed socket,
>> closing socket connection and attempting reconnect
>> 15/06/22 10:44:00 INFO ClientCnxn: Unable to read additional data from
>> server sessionid 0x24dc5a319b40090, likely server has closed socket,
>> closing socket connection and attempting reconnect
>> 15/06/22 10:44:01 INFO ConnectionStateManager: State change: SUSPENDED
>> 15/06/22 10:44:01 INFO ConnectionStateManager: State change: SUSPENDED
>> 15/06/22 10:44:01 WARN ConnectionStateManager: There are no
>> ConnectionStateListeners registered.
>> 15/06/22 10:44:01 INFO ZooKeeperLeaderElectionAgent: We have lost
>> leadership
>> 15/06/22 10:44:01 ERROR Master: Leadership has been revoked -- master
>> shutting down.
>>
>>
>> On Thu, Jun 11, 2015 at 8:59 PM, Tathagata Das <td...@databricks.com>
>> wrote:
>>
>>> Do you have the event logging enabled?
>>>
>>> TD
>>>
>>> On Thu, Jun 11, 2015 at 11:24 AM, scar0909 <sc...@gmail.com> wrote:
>>>
>>>> I have the same problem. I realized that the Spark master becomes
>>>> unresponsive when we kill the ZooKeeper leader (of course, leader
>>>> election is delegated to ZooKeeper). Please let me know if you have
>>>> any developments.
>>>>
>>>>
>>>>
>>>> --
>>>> View this message in context:
>>>> http://apache-spark-user-list.1001560.n3.nabble.com/Shutdown-with-streaming-driver-running-in-cluster-broke-master-web-UI-permanently-tp4149p23284.html
>>>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>>>
>>>> ---------------------------------------------------------------------
>>>> To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
>>>> For additional commands, e-mail: user-help@spark.apache.org
>>>>
>>>>
>>>
>>
>

Re: Shutdown with streaming driver running in cluster broke master web UI permanently

Posted by Tathagata Das <td...@databricks.com>.
Maybe this is a known issue with Spark Streaming and the master web UI.
Disable event logging, and it should be fine.

https://issues.apache.org/jira/browse/SPARK-6270
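
To try that with the usual conf/spark-defaults.conf mechanism (the property
name is the standard one; how the streaming driver is actually submitted here
is assumed):

    # conf/spark-defaults.conf on the machine that submits the streaming driver
    spark.eventLog.enabled   false

Equivalently, drop any --conf spark.eventLog.enabled=true from the submit
command.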

On Mon, Jun 22, 2015 at 8:54 AM, scar scar <sc...@gmail.com> wrote:

> Sorry I was on vacation for a few days. Yes, it is on. This is what I have
> in the logs:
>
> 15/06/22 10:44:00 INFO ClientCnxn: Unable to read additional data from
> server sessionid 0x14dd82e22f70ef1, likely server has closed socket,
> closing socket connection and attempting reconnect
> 15/06/22 10:44:00 INFO ClientCnxn: Unable to read additional data from
> server sessionid 0x24dc5a319b40090, likely server has closed socket,
> closing socket connection and attempting reconnect
> 15/06/22 10:44:01 INFO ConnectionStateManager: State change: SUSPENDED
> 15/06/22 10:44:01 INFO ConnectionStateManager: State change: SUSPENDED
> 15/06/22 10:44:01 WARN ConnectionStateManager: There are no
> ConnectionStateListeners registered.
> 15/06/22 10:44:01 INFO ZooKeeperLeaderElectionAgent: We have lost
> leadership
> 15/06/22 10:44:01 ERROR Master: Leadership has been revoked -- master
> shutting down.
>
>
> On Thu, Jun 11, 2015 at 8:59 PM, Tathagata Das <td...@databricks.com>
> wrote:
>
>> Do you have the event logging enabled?
>>
>> TD
>>
>> On Thu, Jun 11, 2015 at 11:24 AM, scar0909 <sc...@gmail.com> wrote:
>>
>>> I have the same problem. I realized that the Spark master becomes
>>> unresponsive when we kill the ZooKeeper leader (of course, leader
>>> election is delegated to ZooKeeper). Please let me know if you have
>>> any developments.
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://apache-spark-user-list.1001560.n3.nabble.com/Shutdown-with-streaming-driver-running-in-cluster-broke-master-web-UI-permanently-tp4149p23284.html
>>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
>>> For additional commands, e-mail: user-help@spark.apache.org
>>>
>>>
>>
>

Re: Shutdown with streaming driver running in cluster broke master web UI permanently

Posted by scar scar <sc...@gmail.com>.
Sorry I was on vacation for a few days. Yes, it is on. This is what I have
in the logs:

15/06/22 10:44:00 INFO ClientCnxn: Unable to read additional data from
server sessionid 0x14dd82e22f70ef1, likely server has closed socket,
closing socket connection and attempting reconnect
15/06/22 10:44:00 INFO ClientCnxn: Unable to read additional data from
server sessionid 0x24dc5a319b40090, likely server has closed socket,
closing socket connection and attempting reconnect
15/06/22 10:44:01 INFO ConnectionStateManager: State change: SUSPENDED
15/06/22 10:44:01 INFO ConnectionStateManager: State change: SUSPENDED
15/06/22 10:44:01 WARN ConnectionStateManager: There are no
ConnectionStateListeners registered.
15/06/22 10:44:01 INFO ZooKeeperLeaderElectionAgent: We have lost leadership
15/06/22 10:44:01 ERROR Master: Leadership has been revoked -- master
shutting down.


On Thu, Jun 11, 2015 at 8:59 PM, Tathagata Das <td...@databricks.com> wrote:

> Do you have the event logging enabled?
>
> TD
>
> On Thu, Jun 11, 2015 at 11:24 AM, scar0909 <sc...@gmail.com> wrote:
>
>> I have the same problem. I realized that the Spark master becomes
>> unresponsive when we kill the ZooKeeper leader (of course, leader
>> election is delegated to ZooKeeper). Please let me know if you have any
>> developments.
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/Shutdown-with-streaming-driver-running-in-cluster-broke-master-web-UI-permanently-tp4149p23284.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
>> For additional commands, e-mail: user-help@spark.apache.org
>>
>>
>

Re: Shutdown with streaming driver running in cluster broke master web UI permanently

Posted by Tathagata Das <td...@databricks.com>.
Do you have the event logging enabled?
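
A quick way to check, assuming the default config locations: event logging is
off unless it was explicitly enabled, either in conf/spark-defaults.conf or on
the submit command line, so something like

    grep -i eventLog conf/spark-defaults.conf
    # and look for spark.eventLog.enabled=true in the driver's submit options

should show whether it is on.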

TD

On Thu, Jun 11, 2015 at 11:24 AM, scar0909 <sc...@gmail.com> wrote:

> I have the same problem. I realized that the Spark master becomes
> unresponsive when we kill the ZooKeeper leader (of course, leader
> election is delegated to ZooKeeper). Please let me know if you have any
> developments.
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Shutdown-with-streaming-driver-running-in-cluster-broke-master-web-UI-permanently-tp4149p23284.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
> For additional commands, e-mail: user-help@spark.apache.org
>
>

Re: Shutdown with streaming driver running in cluster broke master web UI permanently

Posted by scar0909 <sc...@gmail.com>.
I have the same problem. I realized that the Spark master becomes
unresponsive when we kill the ZooKeeper leader (of course, leader
election is delegated to ZooKeeper). Please let me know if you have any
developments.



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Shutdown-with-streaming-driver-running-in-cluster-broke-master-web-UI-permanently-tp4149p23284.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org