Posted to reviews@spark.apache.org by tgravescs <gi...@git.apache.org> on 2015/12/01 22:51:11 UTC

[GitHub] spark pull request: [SPARK-10911] Executors should System.exit on ...

Github user tgravescs commented on the pull request:

    https://github.com/apache/spark/pull/9946#issuecomment-161106779
  
    @srowen do you know of any actual use cases this will break?  We've finished running the user code and are exiting anyway, so things should just be shutting down.  At this point it shouldn't be committing anything, and under normal circumstances, if it takes too long, it's going to get shot anyway.
    
     If we know it will affect things, I'm fine with leaving it out, because under normal circumstances the cluster manager handles removing these.  But this particular condition happened when there was an issue with the cluster manager as well.  Leaving tasks around is bad, and anything we can do to protect users is, in my opinion, good.
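
    The failure mode under discussion is a JVM that has finished its work but stays alive because some non-daemon thread never exits. A minimal Java sketch (illustrative only, not Spark's executor code; the class and helper names are made up) of how such a thread blocks JVM shutdown, and why an explicit `System.exit` sidesteps it:

    ```java
    import java.util.concurrent.CountDownLatch;

    public class LingeringThreadDemo {
        // Returns true if any non-daemon thread other than the caller is still
        // alive -- the condition that keeps a JVM running after main() returns.
        static boolean hasLingeringNonDaemonThread() {
            for (Thread t : Thread.getAllStackTraces().keySet()) {
                if (t != Thread.currentThread() && !t.isDaemon() && t.isAlive()) {
                    return true;
                }
            }
            return false;
        }

        public static void main(String[] args) throws InterruptedException {
            CountDownLatch started = new CountDownLatch(1);
            Thread lingering = new Thread(() -> {
                started.countDown();
                try { Thread.sleep(60_000); } catch (InterruptedException e) { }
            });
            lingering.setDaemon(false);   // non-daemon: the JVM waits for it
            lingering.start();
            started.await();

            System.out.println(hasLingeringNonDaemonThread());  // prints true
            // An executor stuck in this state hangs around after its work is
            // done.  An explicit System.exit(code) forces the JVM down (still
            // running shutdown hooks) instead of waiting on threads like this.
            // Here the demo just cleans up so it can exit normally:
            lingering.interrupt();
            lingering.join();
        }
    }
    ```

    The trade-off raised in the thread is visible here: `System.exit` runs registered shutdown hooks but does not wait for in-flight threads, so anything mid-commit in such a thread would be cut off.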


