Posted to issues@spark.apache.org by "Thomas Graves (Jira)" <ji...@apache.org> on 2019/11/11 18:51:00 UTC

[jira] [Commented] (SPARK-29762) GPU Scheduling - default task resource amount to 1

    [ https://issues.apache.org/jira/browse/SPARK-29762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16971798#comment-16971798 ] 

Thomas Graves commented on SPARK-29762:
---------------------------------------

this is actually more complex than you might think, because the resource configs are just configs.  So you have spark.executor.resource.gpu.amount, for instance, and the corresponding task config would be spark.task.resource.gpu.amount, where gpu could be any resource.  The way the code is written now, it just grabs all the resources and iterates over them in various places, assuming you have specified a task requirement for each executor resource.
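
For example, a minimal SparkConf sketch of that naming pattern (the resource name "gpu" here is just an example; any resource name follows the same two namespaces):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      // executor side: how many of the resource each executor has
      .set("spark.executor.resource.gpu.amount", "4")
      // task side: how many each task needs -- today this must be set explicitly
      .set("spark.task.resource.gpu.amount", "1")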

If you remove that assumption, you now have to be careful about what you are iterating over; really you have to drive off the resources from the executor configs, not the task configs.  But you still have to read the task configs, and if a resource isn't there, default it to 1.
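
A rough sketch of that defaulting logic (the helper name taskAmounts is made up for illustration, not the actual Spark internals):

    import org.apache.spark.SparkConf

    // Iterate over the *executor* resource configs, and for each resource read
    // the matching task config, defaulting the amount to 1 when it is absent.
    def taskAmounts(conf: SparkConf): Map[String, Int] = {
      val executorResourceNames = conf.getAllWithPrefix("spark.executor.resource.")
        .map { case (key, _) => key.split('.').head }   // "gpu.amount" -> "gpu"
        .toSet
      executorResourceNames
        .map(name => name -> conf.getInt(s"spark.task.resource.$name.amount", 1))
        .toMap
    }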

> GPU Scheduling - default task resource amount to 1
> --------------------------------------------------
>
>                 Key: SPARK-29762
>                 URL: https://issues.apache.org/jira/browse/SPARK-29762
>             Project: Spark
>          Issue Type: Story
>          Components: Spark Core
>    Affects Versions: 3.0.0
>            Reporter: Thomas Graves
>            Priority: Major
>
> Default the task-level resource configs (for gpu/fpga, etc.) to 1.  So if the user specifies an executor resource, then to make it more user friendly let's have the corresponding task resource config default to 1.  This is OK right now since we require resources to have an address.  It also matches what we do for the spark.task.cpus config.
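
For comparison, a small sketch of the spark.task.cpus analogy (the values and the "gpu" resource name are illustrative):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.executor.resource.gpu.amount", "2")   // only the executor side is set

    // spark.task.cpus already reads as 1 when unset; under this proposal the
    // task-level resource amount would default the same way.
    val cpusPerTask = conf.getInt("spark.task.cpus", 1)                 // -> 1
    val gpusPerTask = conf.getInt("spark.task.resource.gpu.amount", 1)  // -> 1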



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org