Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/04/16 15:32:43 UTC

[GitHub] [spark] squito commented on issue #24374: [SPARK-27366][CORE] Support GPU Resources in Spark job scheduling

URL: https://github.com/apache/spark/pull/24374#issuecomment-483713098
 
 
   One general thought I have -- there seem to be a lot of changes to support general resource tracking, though only GPUs are supported here.  These are all internal classes, so I'm wondering whether it's useful to even put in those abstractions now.  Is FPGA support (or whatever other special hardware) still years away?  If nobody has even experimented with it, are we sure the generalizations you're putting in would actually be useful in those cases?
   
   I don't really know anything about other accelerators, so I don't have any strong feelings here -- just a general concern about putting in abstractions too early.  Just wanted to mention it; I'll leave it up to you.
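   To make the tradeoff concrete, the kind of abstraction being debated might look roughly like the following (a hypothetical Python sketch for illustration only -- not Spark's actual internal classes or names). Keying resources by name means a future FPGA type needs no new code paths, but the indirection is pure overhead for as long as GPUs are the only resource anyone uses:

   ```python
   # Hypothetical sketch: generic per-executor resource tracking,
   # rather than a hardcoded GPU-specific field/counter.

   class ExecutorResources:
       """Tracks arbitrary resource types by name, e.g. 'gpu', 'fpga'."""

       def __init__(self, resources):
           # resources: dict mapping resource name -> list of addresses
           self.total = {name: list(addrs) for name, addrs in resources.items()}
           self.free = {name: list(addrs) for name, addrs in resources.items()}

       def acquire(self, name, amount):
           # Generic lookup: an unknown resource type simply has nothing free.
           if len(self.free.get(name, [])) < amount:
               return None
           taken = self.free[name][:amount]
           self.free[name] = self.free[name][amount:]
           return taken

       def release(self, name, addrs):
           self.free[name].extend(addrs)


   res = ExecutorResources({"gpu": ["0", "1"]})
   print(res.acquire("gpu", 1))   # -> ['0']
   print(res.acquire("fpga", 1))  # -> None (no FPGA-specific code path needed)
   ```

   The alternative -- a plain `numGpus` field on the executor -- is simpler today but would have to be reworked into something like the above if a second resource type ever arrived.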

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org