Posted to reviews@spark.apache.org by windkit <gi...@git.apache.org> on 2017/10/23 07:12:08 UTC
[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Rejec...
GitHub user windkit opened a pull request:
https://github.com/apache/spark/pull/19555
[SPARK-22133][DOCS] Documentation for Mesos Reject Offer Configurations
## What changes were proposed in this pull request?
Documentation about Mesos Reject Offer Configurations
## Related PR
https://github.com/apache/spark/pull/19510 for `spark.mem.max`
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/windkit/spark spark_22133
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/19555.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #19555
----
commit 6c738fea83965a9c2a2448e0e42292d6c034cdf2
Author: Li, YanKit | Wilson | RIT <ya...@rakuten.com>
Date: 2017-10-23T06:55:24Z
[SPARK-22133][DOCS] Documentation for Mesos Reject Offer Configurations
commit 614a4e0a741d96b0e96541d9afb6a72e53cc1d43
Author: Li, YanKit | Wilson | RIT <ya...@rakuten.com>
Date: 2017-10-23T06:59:15Z
[SPARK-22133][DOCS] Mesos Reject Offer Configurations Documentation change for spark.mem.max
----
---
---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org
[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Rejec...
Posted by windkit <gi...@git.apache.org>.
Github user windkit commented on a diff in the pull request:
https://github.com/apache/spark/pull/19555#discussion_r146273016
--- Diff: docs/running-on-mesos.md ---
@@ -613,6 +621,41 @@ See the [configuration page](configuration.html) for information on Spark config
driver disconnects, the master immediately tears down the framework.
</td>
</tr>
+<tr>
+ <td><code>spark.mesos.rejectOfferDuration</code></td>
+ <td><code>120s</code></td>
+ <td>
+ The amount of time that the master will reject offer after declining
--- End diff --
Thanks for the comment, I will update it.
---
[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Rejec...
Posted by ArtRand <gi...@git.apache.org>.
Github user ArtRand commented on a diff in the pull request:
https://github.com/apache/spark/pull/19555#discussion_r146242931
--- Diff: docs/running-on-mesos.md ---
@@ -344,6 +345,13 @@ See the [configuration page](configuration.html) for information on Spark config
</td>
</tr>
<tr>
+ <td><code>spark.mem.max</code></td>
--- End diff --
As above, please add this in the separate PR.
---
[GitHub] spark issue #19555: [SPARK-22133][DOCS] Documentation for Mesos Reject Offer...
Posted by srowen <gi...@git.apache.org>.
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/19555
Merged to master
---
[GitHub] spark issue #19555: [SPARK-22133][DOCS] Documentation for Mesos Reject Offer...
Posted by skonto <gi...@git.apache.org>.
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/19555
@susanxhuynh could you pls review this?
---
[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Rejec...
Posted by ArtRand <gi...@git.apache.org>.
Github user ArtRand commented on a diff in the pull request:
https://github.com/apache/spark/pull/19555#discussion_r146242882
--- Diff: docs/running-on-mesos.md ---
@@ -196,17 +196,18 @@ configuration variables:
* Executor memory: `spark.executor.memory`
* Executor cores: `spark.executor.cores`
-* Number of executors: `spark.cores.max`/`spark.executor.cores`
+* Number of executors: min(`spark.cores.max`/`spark.executor.cores`,
+`spark.mem.max`/(`spark.executor.memory`+`spark.mesos.executor.memoryOverhead`))
Please see the [Spark Configuration](configuration.html) page for
details and default values.
Executors are brought up eagerly when the application starts, until
-`spark.cores.max` is reached. If you don't set `spark.cores.max`, the
-Spark application will reserve all resources offered to it by Mesos,
-so we of course urge you to set this variable in any sort of
-multi-tenant cluster, including one which runs multiple concurrent
-Spark applications.
+`spark.cores.max` or `spark.mem.max` is reached. If you don't set
+`spark.cores.max` and `spark.mem.max`, the Spark application will
+reserve all resources offered to it by Mesos, so we of course urge
--- End diff --
`reserve` is probably not the correct term to use here. I would use `consume`, as Spark does not actually make resource reservations http://mesos.apache.org/documentation/latest/reservation/
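As a rough illustration of the executor-count bound quoted in the diff above, the arithmetic can be sketched as follows. This is not Spark's actual scheduler code; the function name and all numeric settings are hypothetical example values chosen here for illustration:

```python
# Illustrative sketch of min(spark.cores.max / spark.executor.cores,
#   spark.mem.max / (spark.executor.memory + spark.mesos.executor.memoryOverhead));
# not Spark's actual scheduler logic. All values are example settings.
def max_executors(cores_max, executor_cores, mem_max_mb,
                  executor_memory_mb, memory_overhead_mb):
    by_cores = cores_max // executor_cores
    by_memory = mem_max_mb // (executor_memory_mb + memory_overhead_mb)
    return min(by_cores, by_memory)

# e.g. spark.cores.max=16, spark.executor.cores=4,
# spark.mem.max=20g (20480 MB), spark.executor.memory=4g, overhead=384 MB:
# cores allow 16/4 = 4 executors; memory allows 20480/(4096+384) = 4.
print(max_executors(16, 4, 20480, 4096, 384))  # -> 4
```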
---
[GitHub] spark issue #19555: [SPARK-22133][DOCS] Documentation for Mesos Reject Offer...
Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19555
**[Test build #3984 has finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3984/testReport)** for PR 19555 at commit [`5f3b35b`](https://github.com/apache/spark/commit/5f3b35b844a06f08b5e63a64e488a1208acd1243).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds no public classes.
---
[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Rejec...
Posted by ArtRand <gi...@git.apache.org>.
Github user ArtRand commented on a diff in the pull request:
https://github.com/apache/spark/pull/19555#discussion_r146242613
--- Diff: docs/running-on-mesos.md ---
@@ -196,17 +196,18 @@ configuration variables:
* Executor memory: `spark.executor.memory`
* Executor cores: `spark.executor.cores`
-* Number of executors: `spark.cores.max`/`spark.executor.cores`
+* Number of executors: min(`spark.cores.max`/`spark.executor.cores`,
+`spark.mem.max`/(`spark.executor.memory`+`spark.mesos.executor.memoryOverhead`))
Please see the [Spark Configuration](configuration.html) page for
details and default values.
Executors are brought up eagerly when the application starts, until
-`spark.cores.max` is reached. If you don't set `spark.cores.max`, the
-Spark application will reserve all resources offered to it by Mesos,
-so we of course urge you to set this variable in any sort of
-multi-tenant cluster, including one which runs multiple concurrent
-Spark applications.
+`spark.cores.max` or `spark.mem.max` is reached. If you don't set
--- End diff --
Could you please add these changes only in https://github.com/apache/spark/pull/19510/?
---
[GitHub] spark issue #19555: [SPARK-22133][DOCS] Documentation for Mesos Reject Offer...
Posted by ArtRand <gi...@git.apache.org>.
Github user ArtRand commented on the issue:
https://github.com/apache/spark/pull/19555
Hey @srowen can we get a merge on this?
---
[GitHub] spark issue #19555: [SPARK-22133][DOCS] Documentation for Mesos Reject Offer...
Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19555
Can one of the admins verify this patch?
---
[GitHub] spark issue #19555: [SPARK-22133][DOCS] Documentation for Mesos Reject Offer...
Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19555
**[Test build #3984 has started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3984/testReport)** for PR 19555 at commit [`5f3b35b`](https://github.com/apache/spark/commit/5f3b35b844a06f08b5e63a64e488a1208acd1243).
---
[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Rejec...
Posted by windkit <gi...@git.apache.org>.
Github user windkit commented on a diff in the pull request:
https://github.com/apache/spark/pull/19555#discussion_r146272841
--- Diff: docs/running-on-mesos.md ---
@@ -196,17 +196,18 @@ configuration variables:
* Executor memory: `spark.executor.memory`
* Executor cores: `spark.executor.cores`
-* Number of executors: `spark.cores.max`/`spark.executor.cores`
+* Number of executors: min(`spark.cores.max`/`spark.executor.cores`,
+`spark.mem.max`/(`spark.executor.memory`+`spark.mesos.executor.memoryOverhead`))
Please see the [Spark Configuration](configuration.html) page for
details and default values.
Executors are brought up eagerly when the application starts, until
-`spark.cores.max` is reached. If you don't set `spark.cores.max`, the
-Spark application will reserve all resources offered to it by Mesos,
-so we of course urge you to set this variable in any sort of
-multi-tenant cluster, including one which runs multiple concurrent
-Spark applications.
+`spark.cores.max` or `spark.mem.max` is reached. If you don't set
+`spark.cores.max` and `spark.mem.max`, the Spark application will
+reserve all resources offered to it by Mesos, so we of course urge
--- End diff --
Agree. I will update it later on
---
[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Rejec...
Posted by windkit <gi...@git.apache.org>.
Github user windkit commented on a diff in the pull request:
https://github.com/apache/spark/pull/19555#discussion_r146272708
--- Diff: docs/running-on-mesos.md ---
@@ -196,17 +196,18 @@ configuration variables:
* Executor memory: `spark.executor.memory`
* Executor cores: `spark.executor.cores`
-* Number of executors: `spark.cores.max`/`spark.executor.cores`
+* Number of executors: min(`spark.cores.max`/`spark.executor.cores`,
+`spark.mem.max`/(`spark.executor.memory`+`spark.mesos.executor.memoryOverhead`))
Please see the [Spark Configuration](configuration.html) page for
details and default values.
Executors are brought up eagerly when the application starts, until
-`spark.cores.max` is reached. If you don't set `spark.cores.max`, the
-Spark application will reserve all resources offered to it by Mesos,
-so we of course urge you to set this variable in any sort of
-multi-tenant cluster, including one which runs multiple concurrent
-Spark applications.
+`spark.cores.max` or `spark.mem.max` is reached. If you don't set
--- End diff --
@ArtRand Sure, I will move the documentation to 19510
---
[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Rejec...
Posted by ArtRand <gi...@git.apache.org>.
Github user ArtRand commented on a diff in the pull request:
https://github.com/apache/spark/pull/19555#discussion_r146244212
--- Diff: docs/running-on-mesos.md ---
@@ -613,6 +621,41 @@ See the [configuration page](configuration.html) for information on Spark config
driver disconnects, the master immediately tears down the framework.
</td>
</tr>
+<tr>
+ <td><code>spark.mesos.rejectOfferDuration</code></td>
+ <td><code>120s</code></td>
+ <td>
+ The amount of time that the master will reject offer after declining
--- End diff --
This doesn't sound correct. The mesos.proto (https://github.com/apache/mesos/blob/master/include/mesos/mesos.proto#L2310) states:
```
Time to consider unused resources refused. Note that all unused
resources will be considered refused and use the default value
(below) regardless of whether Filters was passed to
SchedulerDriver::launchTasks. You MUST pass Filters with this
field set to change this behavior (i.e., get another offer which
includes unused resources sooner or later than the default).
```
some simple word-smithing or a link should make it clearer.
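For context, a reject-offer duration like the one documented in this PR would be supplied like any other Spark configuration property at submit time. The invocation below is an illustrative sketch only: the master URL, class name, and jar path are placeholders, not values from this thread:

```shell
# Illustrative spark-submit invocation; master URL, class, and jar are
# placeholders. spark.mesos.rejectOfferDuration controls how long
# declined Mesos resources are filtered from subsequent offers.
spark-submit \
  --master mesos://zk://host:2181/mesos \
  --conf spark.mesos.rejectOfferDuration=120s \
  --class org.example.MyApp \
  my-app.jar
```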
---
[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Rejec...
Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19555
---
[GitHub] spark issue #19555: [SPARK-22133][DOCS] Documentation for Mesos Reject Offer...
Posted by ArtRand <gi...@git.apache.org>.
Github user ArtRand commented on the issue:
https://github.com/apache/spark/pull/19555
LGTM
---