Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2020/11/13 18:07:36 UTC

[GitHub] [spark] mridulm commented on pull request #30370: [SPARK-33446][CORE] Add config spark.executor.coresOverhead

mridulm commented on pull request #30370:
URL: https://github.com/apache/spark/pull/30370#issuecomment-726927955


   I am not sure I understand what the use case here is.
   
   To answer the JIRA example:
   Currently we can specify 1 core and 6 GB, even if the underlying system allocates in increments of 1 core / 3 GB.
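   
   For concreteness, a minimal sketch of such a request in Scala (the app name is a placeholder; the configs are the standard spark.executor.cores and spark.executor.memory):
   
       import org.apache.spark.SparkConf
   
       // Request executors with 1 core and 6 GB each, regardless of the
       // allocation increments the underlying cluster manager uses.
       val conf = new SparkConf()
         .setAppName("example") // placeholder app name
         .set("spark.executor.cores", "1")
         .set("spark.executor.memory", "6g")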
   
   If the underlying resource allocation is in blocks of 1 core / 3 GB, then requesting a 6 GB container wastes 1 core. Do we have cases where the allocator only considers cores and not memory? (The reverse used to be the case in YARN about a decade back, where allocation was driven only by memory, with cores inferred.)
   
   To give a slightly different example: if memory allocation is in multiples of 1 GB, asking for a 1.5 GB executor gives you a 2 GB container, with -Xmx set to 1.5 GB and the additional 0.5 GB 'ignored' [1].
   Similarly, asking for a 6 GB / 1 core executor should give you a 6 GB / 2 core container, where we don't use 1 core.
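   
   A small sketch of the rounding arithmetic described above (roundUp and the increment values are hypothetical illustrations of the allocator's behaviour, not Spark code):
   
       // Round a resource request up to the allocator's increment.
       def roundUp(request: Long, increment: Long): Long =
         ((request + increment - 1) / increment) * increment
   
       val memIncrementMb = 1024L                          // assumed 1 GB increments
       val grantedMemMb   = roundUp(1536L, memIncrementMb) // 1.5 GB request -> 2048 MB
   
       // With assumed 1 core / 3 GB blocks, a 6 GB request needs 2 blocks,
       // so 2 cores are granted even though only 1 was asked for.
       val blockMemMb   = 3072L
       val grantedCores = (roundUp(6144L, blockMemMb) / blockMemMb) * 1 // = 2 cores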
   
   
   [1] Purposefully ignoring memory overhead for simplicity of explanation.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org