Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2015/10/08 22:07:26 UTC
[jira] [Reopened] (SPARK-11005) Spark 1.5 Shuffle performance - (sort-based shuffle)
[ https://issues.apache.org/jira/browse/SPARK-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen reopened SPARK-11005:
-------------------------------
[~vnayak053] "Fixed" isn't an appropriate resolution, since no action was taken and there wasn't necessarily a problem.
> Spark 1.5 Shuffle performance - (sort-based shuffle)
> ----------------------------------------------------
>
> Key: SPARK-11005
> URL: https://issues.apache.org/jira/browse/SPARK-11005
> Project: Spark
> Issue Type: Question
> Components: Shuffle, SQL
> Affects Versions: 1.5.0
> Environment: 6 node cluster with 1 master and 5 worker nodes.
> Memory > 100 GB each
> Cores = 72 each
> Input data ~94 GB
> Reporter: Sandeep Pal
>
> In the case of terasort via Spark SQL with 20 total cores (4 cores per executor), the map stage takes 14 minutes (around 26-30 s per task), whereas if I increase the number of cores to 60 (12 cores per executor), the map stage degrades to 30 minutes (~2.3 minutes per task). I believe the map tasks are independent of each other in the shuffle.
> Each map task has a 128 MB input (one HDFS block) in both of the above cases. So what causes the performance degradation as the number of cores increases?
>
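For reference, the two setups described in the report can be sketched as spark-submit invocations. This is a hedged illustration only: the application jar name, master, and executor memory are placeholders not stated in the issue; only the executor-core counts (4 vs. 12 cores across the 5 worker nodes) come from the report.

```shell
# Case 1: 20 total cores = 5 executors x 4 cores each (~26-30 s per map task)
spark-submit \
  --master yarn \
  --num-executors 5 \
  --executor-cores 4 \
  --executor-memory 20g \
  terasort-app.jar   # placeholder application jar

# Case 2: 60 total cores = 5 executors x 12 cores each (~2.3 min per map task)
spark-submit \
  --master yarn \
  --num-executors 5 \
  --executor-cores 12 \
  --executor-memory 12g \
  terasort-app.jar   # placeholder application jar
```

With the same number of executors, the second setup runs three times as many concurrent map tasks inside each executor JVM, which is the variable the reporter is asking about.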
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org