Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/12/17 01:09:13 UTC

[GitHub] [spark] vanzin commented on issue #26687: [SPARK-30055][k8s] Allow configuration of restart policy for Kubernetes pods

vanzin commented on issue #26687: [SPARK-30055][k8s] Allow configuration of restart policy for Kubernetes pods
URL: https://github.com/apache/spark/pull/26687#issuecomment-566324547
 
 
   What happens when you restart an executor by reusing the same pod? It will come back with the same configuration as before, and thus the same executor ID. No other backend does this; they all allocate new executors, so Spark tends not to behave well when a completely new executor connects back with a recycled executor ID. (It will also make the history server output interesting.)
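   
   To make the failure mode concrete, here is a minimal sketch (not the actual Spark source; SPARK_EXECUTOR_ID is the env var the driver bakes into the executor pod spec) of why a kubelet-driven restart comes back under the old ID:
   
       // Minimal sketch, not real Spark code: the executor ID is read from
       // an env var fixed in the pod spec. A container restart reuses the
       // same spec, so a brand-new JVM re-registers under the old ID.
       object ExecutorIdSketch {
         def main(args: Array[String]): Unit = {
           // Set once by the driver at pod creation; Kubernetes replays it
           // unchanged on every restart of the container.
           val executorId = sys.env.getOrElse("SPARK_EXECUTOR_ID", "<unset>")
           // The driver then sees a fresh process registering under a
           // recycled ID, which no other backend produces.
           println(s"Registering executor $executorId with the driver")
         }
       }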
   
   This sounds a bit risky to me, and the only advantage I'm seeing here is avoiding the pod re-allocation round trip to the k8s API server.
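   
   For reference, the usage I'd expect looks roughly like the sketch below. The conf key name is my assumption (the PR defines the real one); today executor pods are created with restartPolicy=Never, so a failed executor is replaced by the driver rather than restarted in place by the kubelet.
   
       // Hypothetical usage sketch; "spark.kubernetes.executor.restartPolicy"
       // is an assumed key, not necessarily what this PR implements.
       import org.apache.spark.SparkConf
   
       object RestartPolicyDemo {
         def main(args: Array[String]): Unit = {
           val conf = new SparkConf()
             .setAppName("restart-policy-demo")
             // Would switch the executor pods from the current
             // restartPolicy=Never to a kubelet-managed restart.
             .set("spark.kubernetes.executor.restartPolicy", "OnFailure")
           println(conf.toDebugString)
         }
       }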
