Posted to user@spark.apache.org by Bang Xiao <ch...@gmail.com> on 2017/12/21 07:02:03 UTC

AM restart on another node puts SparkSQL jobs into a hung state

I run "spark-sql  --master yarn --deploy-mode client -f 'SQLs' " in shell, 
The application  is stuck when the AM is down and restart in other nodes. It
seems the driver wait for the next sql. Is this a bug?In my opinion,Either
the application execute the failed sql or exit with a failure when the AM
restart。
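
What I would expect is to be able to fail fast instead, roughly like the
sketch below (spark.yarn.maxAppAttempts is the standard Spark-on-YARN
setting; 'queries.sql' just stands in for my actual SQL file):

    # Cap the application at a single AM attempt so that an AM failure
    # surfaces as an application failure instead of a silent restart.
    spark-sql --master yarn --deploy-mode client \
      --conf spark.yarn.maxAppAttempts=1 \
      -f queries.sql

With something like that, the driver would at least exit with an error I can
act on, rather than sitting idle waiting for the next statement.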



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscribe@spark.apache.org