Posted to issues@spark.apache.org by "Sujith (JIRA)" <ji...@apache.org> on 2018/08/09 13:48:00 UTC

[jira] [Commented] (SPARK-25073) Spark-submit on Yarn Task : When the yarn.nodemanager.resource.memory-mb and/or yarn.scheduler.maximum-allocation-mb is insufficient, Spark always reports an error request to adjust yarn.scheduler.maximum-allocation-mb

    [ https://issues.apache.org/jira/browse/SPARK-25073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574864#comment-16574864 ] 

Sujith commented on SPARK-25073:
--------------------------------

Seems you are right; the message is a bit misleading to the user. As per my understanding, there is also a dependency on the yarn.nodemanager.resource.memory-mb parameter.

*_yarn.nodemanager.resource.memory-mb:_*

Amount of physical memory, in MB, that can be allocated for containers. It is the amount of memory YARN can utilize on this node, and therefore this property should be lower than the total memory of that machine.

*_yarn.scheduler.maximum-allocation-mb_*

It defines the maximum memory allocation available for a container, in MB. The RM can only allocate memory to containers in increments of {{yarn.scheduler.minimum-allocation-mb}}, without exceeding {{yarn.scheduler.maximum-allocation-mb}}, and the maximum should not be more than the total memory of the node.
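
To make the relationship concrete, here is a minimal sketch in Scala, using the hypothetical values from scenario 2 in the quoted report below: since a single container has to fit on one node, the effective per-container ceiling is the smaller of the two settings.

{code:scala}
// Hypothetical values, taken from scenario 2 in the quoted report below (in MB).
val nodeManagerResourceMemoryMb = 8 * 1024    // yarn.nodemanager.resource.memory-mb
val schedulerMaxAllocationMb    = 15 * 1024   // yarn.scheduler.maximum-allocation-mb

// A single container must fit on one node, so the effective
// per-container ceiling is the smaller of the two settings.
val effectiveMaxContainerMb =
  math.min(nodeManagerResourceMemoryMb, schedulerMaxAllocationMb)  // 8192
{code}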

 

I will try to analyze this further and will raise a PR if it requires a fix; my initial reading is below. Thanks.
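
As far as I can see, the check in {{org.apache.spark.deploy.yarn.Client.verifyClusterResources}} compares the requested AM memory plus overhead against the maximum resource capability returned by the RM. That capability can already be bounded by either setting (as scenario 2 below shows), but the message only names 'yarn.scheduler.maximum-allocation-mb'. A rough paraphrase of the logic, from memory, not the exact Spark source; if I recall correctly, the default overhead is 10% of the AM memory with a 384 MB floor, which matches the "+512" and "+1024" figures in the quoted errors below:

{code:scala}
// Rough paraphrase of Client.verifyClusterResources in Spark 2.3;
// names and structure are approximate, not the exact source.
def verifyAmMemory(amMemoryMb: Int, maxMemMb: Int): Unit = {
  // Default AM overhead: 10% of the AM memory, with a 384 MB floor
  // (overridable via spark.yarn.am.memoryOverhead), as far as I recall.
  val overheadMb = math.max((0.10 * amMemoryMb).toInt, 384)

  // maxMemMb is the RM's maximum resource capability. As the scenarios
  // below show, it can be bounded by yarn.nodemanager.resource.memory-mb
  // as well, but the client cannot tell which setting produced the bound,
  // and the message always blames yarn.scheduler.maximum-allocation-mb.
  if (amMemoryMb + overheadMb > maxMemMb) {
    throw new IllegalArgumentException(
      s"Required AM memory ($amMemoryMb+$overheadMb MB) is above the max " +
      s"threshold ($maxMemMb MB) of this cluster! Please increase the " +
      "value of 'yarn.scheduler.maximum-allocation-mb'.")
  }
}
{code}

If that reading is right, the fix is probably just to mention both properties in the message.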


> Spark-submit on Yarn Task : When the yarn.nodemanager.resource.memory-mb and/or yarn.scheduler.maximum-allocation-mb is insufficient, Spark always reports an error request to adjust yarn.scheduler.maximum-allocation-mb
> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-25073
>                 URL: https://issues.apache.org/jira/browse/SPARK-25073
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Submit
>    Affects Versions: 2.3.0, 2.3.1
>            Reporter: vivek kumar
>            Priority: Minor
>
> When the yarn.nodemanager.resource.memory-mb and/or yarn.scheduler.maximum-allocation-mb is insufficient, Spark *always* reports an error asking the user to adjust 'yarn.scheduler.maximum-allocation-mb'. The error message should instead point at 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
>  
> *Scenario 1*. yarn.scheduler.maximum-allocation-mb = 4g and yarn.nodemanager.resource.memory-mb = 8g
> a. Launch a shell on YARN with am.memory less than the nodemanager.resource memory but greater than yarn.scheduler.maximum-allocation-mb
> e.g. *spark-shell --master yarn --conf spark.yarn.am.memory=5g*
>  Error:
> java.lang.IllegalArgumentException: Required AM memory (5120+512 MB) is above the max threshold (4096 MB) of this cluster! Please increase the value of 'yarn.scheduler.maximum-allocation-mb'.
> at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:325)
>  
> *Scenario 2*. yarn.scheduler.maximum-allocation-mb = 15g and yarn.nodemanager.resource.memory-mb = 8g
> a. Launch a shell on YARN with am.memory greater than the nodemanager.resource memory but less than yarn.scheduler.maximum-allocation-mb
> e.g. *spark-shell --master yarn --conf spark.yarn.am.memory=10g*
>  Error:
> java.lang.IllegalArgumentException: Required AM memory (10240+1024 MB) is above the max threshold (*8096 MB*) of this cluster! *Please increase the value of 'yarn.scheduler.maximum-allocation-mb'.*
> at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:325)
>  
> b. Launch a shell on YARN with am.memory greater than both the nodemanager.resource memory and yarn.scheduler.maximum-allocation-mb
> e.g. *spark-shell --master yarn --conf spark.yarn.am.memory=17g*
>  Error:
> java.lang.IllegalArgumentException: Required AM memory (17408+1740 MB) is above the max threshold (*8096 MB*) of this cluster! *Please increase the value of 'yarn.scheduler.maximum-allocation-mb'.*
> at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:325)
>  
> *Expected*: The error message for scenario 2 should point at 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.


