Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/07/29 20:28:28 UTC

[GitHub] [spark] squito commented on issue #24462: [SPARK-26268][CORE] Do not resubmit tasks when executors are lost

squito commented on issue #24462: [SPARK-26268][CORE] Do not resubmit tasks when executors are lost
URL: https://github.com/apache/spark/pull/24462#issuecomment-516149439
 
 
   Why do you want to store the data files on HDFS, but the index files on the executors?  This seems to have the worst of both worlds -- the (bad) resiliency of local storage, and the (bad) performance of remote reads.  Or is the index file backed up somewhere as well?
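
   To make sure we're talking about the same thing, here is roughly the layout I understand this to create -- a made-up sketch with hypothetical names and paths, not the code in this PR:

   ```scala
   import java.io.File
   import org.apache.hadoop.conf.Configuration
   import org.apache.hadoop.fs.{FileSystem, Path}

   // Hypothetical sketch of the hybrid layout: data on the cluster's default
   // (HDFS) filesystem, index on the executor's local disk.
   class HybridShuffleLayoutSketch(hadoopConf: Configuration, localDir: File) {

     // Data file on HDFS: survives executor loss, but every reducer fetch
     // becomes a remote read.
     def dataPath(shuffleId: Int, mapId: Long): Path =
       new Path(s"/shuffle/$shuffleId/$mapId.data")

     // Index file on the executor's local disk: fast to read, but gone when
     // the executor is lost -- which also makes the HDFS data file unusable,
     // unless the index is backed up somewhere too.
     def indexFile(shuffleId: Int, mapId: Long): File =
       new File(localDir, s"shuffle_${shuffleId}_${mapId}.index")

     def writeData(shuffleId: Int, mapId: Long, bytes: Array[Byte]): Unit = {
       val fs = FileSystem.get(hadoopConf)
       val out = fs.create(dataPath(shuffleId, mapId))
       try out.write(bytes) finally out.close()
     }
   }
   ```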
   
   I definitely understand the general problem of knowing what to do with shuffle data when an executor is lost, so I understand why you want to do something *like* what this PR does.  But it probably makes more sense to address this as part of the other shuffle API changes, if possible, rather than as another config.
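
   To be concrete about what I mean by folding this into the shuffle API work instead of adding another config: if the storage backend sits behind a plugin interface, then "keep shuffle files on HDFS" is just one more implementation, and the scheduler can ask the plugin whether output survives executor loss.  A heavily simplified sketch with hypothetical names, not the real interfaces:

   ```scala
   import java.io.{InputStream, OutputStream}

   // Hypothetical, heavily simplified plugin shape -- not the actual
   // interfaces from the shuffle storage API work.
   trait ShuffleStorageBackendSketch {
     // Map side: write one partition of one map task's output.
     def openPartitionWriter(shuffleId: Int, mapId: Long, partition: Int): OutputStream

     // Reduce side: read one partition of one map task's output.
     def openPartitionReader(shuffleId: Int, mapId: Long, partition: Int): InputStream

     // Scheduler side: is the output still readable after the executor that
     // produced it is lost?  An HDFS-backed implementation returns true, so
     // map tasks would not need to be resubmitted; a local-disk implementation
     // returns false and keeps today's behavior.
     def outputSurvivesExecutorLoss: Boolean
   }
   ```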
   
   You might not be able to do everything you want -- in particular, the new API does not support multiple locations for shuffle data.  We decided that was out of scope for now (though maybe a future enhancement).  Is that what you're looking for -- one copy on the executor's local disk, and another copy on HDFS?
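
   For context on why two copies doesn't fit today: each map output is tracked by a single location (a `BlockManagerId` on the `MapStatus`), and the new API keeps that one-location-per-output assumption.  Supporting a primary local copy plus an HDFS fallback would need something shaped like this -- purely hypothetical, just to illustrate the gap:

   ```scala
   import org.apache.spark.storage.BlockManagerId

   // Purely hypothetical: today a map output has exactly one location, so
   // there is no place to record a second copy.  Multiple locations would
   // need something like a fallback field, which is out of scope for now.
   case class ReplicatedMapOutputSketch(
       primary: BlockManagerId,     // executor-local copy, used for normal fetches
       hdfsFallback: Option[String] // e.g. an HDFS path to fall back to after executor loss
   )
   ```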
