Posted to dev@mrql.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2015/05/24 23:21:17 UTC
[jira] [Commented] (MRQL-73) Set the max number of tasks in Spark mode
[ https://issues.apache.org/jira/browse/MRQL-73?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14557849#comment-14557849 ]
ASF GitHub Bot commented on MRQL-73:
------------------------------------
GitHub user fegaras opened a pull request:
https://github.com/apache/incubator-mrql/pull/5
[MRQL-73] Set the max number of tasks in Spark mode
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/fegaras/incubator-mrql spark-config
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/incubator-mrql/pull/5.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #5
----
commit 5eb81d992cec29084d2a97686f95b08cdc727809
Author: fegaras <fe...@cse.uta.edu>
Date: 2015-05-24T21:16:16Z
[MRQL-73] Set the max number of tasks in Spark mode
----
> Set the max number of tasks in Spark mode
> -----------------------------------------
>
> Key: MRQL-73
> URL: https://issues.apache.org/jira/browse/MRQL-73
> Project: MRQL
> Issue Type: Bug
> Components: Run-Time/Spark
> Affects Versions: 0.9.6
> Reporter: Leonidas Fegaras
> Assignee: Leonidas Fegaras
> Priority: Critical
>
> In Spark distributed mode, the number of worker nodes specified by the MRQL -nodes parameter must be propagated to the settings SPARK_WORKER_INSTANCES (renamed SPARK_EXECUTOR_INSTANCES in Spark 1.3.*) and SPARK_WORKER_CORES; otherwise, Spark always uses all the available cores in the cluster.
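The fix described above can be sketched as a small shell fragment. This is a hedged illustration only: the variable names SPARK_WORKER_INSTANCES, SPARK_EXECUTOR_INSTANCES, and SPARK_WORKER_CORES come from the issue text, but the NODES variable and the exact wiring into the MRQL launch script are hypothetical, not taken from the actual patch in pull request #5.

```shell
#!/bin/sh
# Hypothetical sketch: cap Spark's parallelism to the node count given by
# MRQL's -nodes flag, instead of letting Spark claim every core in the cluster.
NODES=4                                # assumed: the value of MRQL's -nodes parameter
export SPARK_WORKER_INSTANCES=$NODES   # called SPARK_EXECUTOR_INSTANCES in Spark 1.3.*
export SPARK_WORKER_CORES=1            # assumed: one core per worker instance
echo "workers=$SPARK_WORKER_INSTANCES cores=$SPARK_WORKER_CORES"
```

With these variables exported before Spark starts, the total core usage is bounded by SPARK_WORKER_INSTANCES * SPARK_WORKER_CORES rather than by the cluster size.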
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)