Posted to issues@spark.apache.org by "Michael Armbrust (JIRA)" <ji...@apache.org> on 2015/05/08 01:24:00 UTC
[jira] [Resolved] (SPARK-7277) property mapred.reduce.tasks replaced by spark.sql.shuffle.partitions
[ https://issues.apache.org/jira/browse/SPARK-7277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Michael Armbrust resolved SPARK-7277.
-------------------------------------
Resolution: Fixed
Fix Version/s: 1.4.0
Issue resolved by pull request 5811
[https://github.com/apache/spark/pull/5811]
> property mapred.reduce.tasks replaced by spark.sql.shuffle.partitions
> ---------------------------------------------------------------------
>
> Key: SPARK-7277
> URL: https://issues.apache.org/jira/browse/SPARK-7277
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.3.1
> Reporter: Sebastian
> Fix For: 1.4.0
>
>
> When I use "SET mapred.reduce.tasks" I get the warning "SetCommand: Property mapred.reduce.tasks is deprecated, automatically converted to spark.sql.shuffle.partitions instead."
> It's true that mapred.reduce.tasks is deprecated, but this replacement causes serious trouble:
> Setting mapred.reduce.tasks to -1 (negative one) is valid and causes Hadoop/Hive to determine the required number of reducers automatically.
> Setting spark.sql.shuffle.partitions to a negative value can cause Spark to produce incorrect results.
> On my system (spark-sql 1.3.1 running on a single machine in "local" mode), with this setting, any outer join produces no output (whereas an inner join does produce output).
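> A sketch of the kind of spark-sql session that should reproduce it (the tables t1 and t2 here are placeholders, not from the original report; any two joinable tables should do):
>
>   -- valid in Hive: -1 tells the engine to pick the reducer count itself
>   SET mapred.reduce.tasks=-1;
>   -- per the warning above, spark-sql converts this to spark.sql.shuffle.partitions=-1
>   SELECT * FROM t1 LEFT OUTER JOIN t2 ON t1.id = t2.id;  -- incorrectly returns no rows
>   SELECT * FROM t1 JOIN t2 ON t1.id = t2.id;             -- inner join still returns rows
>   -- workaround until the fix: set the Spark property explicitly to a positive value
>   SET spark.sql.shuffle.partitions=200;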
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org