Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/02/19 18:06:07 UTC

[GitHub] clems4ever commented on a change in pull request #23758: [SPARK-17454][MESOS] Use Mesos disk resources for executors.

URL: https://github.com/apache/spark/pull/23758#discussion_r258162145
 
 

 ##########
 File path: docs/running-on-mesos.md
 ##########
 @@ -702,7 +702,16 @@ See the [configuration page](configuration.html) for information on Spark config
     Set the maximum number GPU resources to acquire for this job. Note that executors will still launch when no GPU resources are found
     since this configuration is just an upper limit and not a guaranteed amount.
   </td>
-  </tr>
+</tr>
+<tr>
+  <td><code>spark.mesos.disk</code></td>
+  <td><code>0</code></td>
 
 Review comment:
   Well, as you can see in https://github.com/criteo-forks/mesos/blob/3de5efba936c8b7bd1bf88c2fd05006a93271b73/src/common/http.cpp#L725, the Mesos API returns a default value of 0 when no disk is provided.
   
   So as far as I'm concerned it should be fine, but since you asked, let me make the fix so that no disk resource is set on the TaskInfo at all when none is specified in the conf. That way we can be sure Spark stays compatible if the behavior ever changes on the Mesos side (i.e., if "no value" stops being treated as equivalent to 0).
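   
   For illustration only (this is not the patch itself; maybeAddDisk and the option plumbing are hypothetical), a minimal Scala sketch of that behaviour, i.e. only attaching a "disk" resource to the Mesos TaskInfo when spark.mesos.disk is explicitly set, could look like this:
   
   import org.apache.mesos.Protos.{Resource, TaskInfo, Value}
   
   // Hypothetical helper: only add a "disk" resource when the conf provided a value,
   // e.g. diskMb = conf.getOption("spark.mesos.disk").map(_.toDouble),
   // so an unset spark.mesos.disk never shows up as an explicit 0 in the TaskInfo.
   def maybeAddDisk(builder: TaskInfo.Builder, diskMb: Option[Double]): TaskInfo.Builder =
     diskMb match {
       case Some(mb) if mb > 0 =>
         builder.addResources(
           Resource.newBuilder()
             .setName("disk")
             .setType(Value.Type.SCALAR)
             .setScalar(Value.Scalar.newBuilder().setValue(mb))
             .build())
       case _ =>
         builder // leave disk out entirely and let Mesos apply its own default
     }
   
   The actual change may wire this through the existing resource-building code; the sketch is just to make the intent concrete.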

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org