Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:22:05 UTC
[jira] [Updated] (SPARK-17457) Spark SQL shows poor performance for
group by and sort by on multiple columns
[ https://issues.apache.org/jira/browse/SPARK-17457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon updated SPARK-17457:
---------------------------------
Labels: bulk-closed (was: )
> Spark SQL shows poor performance for group by and sort by on multiple columns
> ------------------------------------------------------------------------------
>
> Key: SPARK-17457
> URL: https://issues.apache.org/jira/browse/SPARK-17457
> Project: Spark
> Issue Type: Improvement
> Affects Versions: 1.4.0
> Reporter: Sabyasachi Nayak
> Priority: Major
> Labels: bulk-closed
>
> In one of our use cases, a Hive query running on Tez takes 45 minutes, but the same query run through Spark SQL using HiveContext takes more than 2 hours. The query has no joins, only group by and sort by on multiple columns.
> spark-submit --class DataLoadingSpark --master yarn --deploy-mode client \
>   --num-executors 60 --executor-memory 16G --driver-memory 4G --executor-cores 5 \
>   --conf spark.yarn.executor.memoryOverhead=2048 \
>   --conf spark.shuffle.consolidateFiles=true \
>   --conf spark.shuffle.memoryFraction=0.5 \
>   --conf spark.storage.memoryFraction=0.1 \
>   --conf spark.io.compression.codec=lzf \
>   --conf spark.driver.extraJavaOptions="-XX:MaxPermSize=1024m -XX:PermSize=512m -Dhdp.version=2.3.2.0-2950" \
>   --conf spark.shuffle.blockTransferService=nio \
>   DataLoadingSpark.jar --inputFile basket_txn.
> The Spark UI shows:
> Input is 500+ GB and shuffle write is also 500+ GB
> Spark version - 1.4.0
> HDP 2.3.2.0-2950
> 50-node cluster, 1100 vcores
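
The original query is not included in the report, but based on the description (no joins, only group by and sort by on multiple columns over the basket_txn input) its shape would be something like the following sketch. The column names and aggregate are hypothetical, chosen only to illustrate the pattern:

```sql
-- Hypothetical shape of the reported query: no joins, only a
-- GROUP BY and a SORT BY on multiple columns over a large table.
-- Column names (store_id, txn_date, amount) are illustrative.
SELECT store_id, txn_date, SUM(amount) AS total_amount
FROM basket_txn
GROUP BY store_id, txn_date
SORT BY store_id, txn_date;
```

Note that in HiveQL/Spark SQL, SORT BY orders rows only within each partition (unlike ORDER BY, which imposes a total order). A query of this shape still has to shuffle the full dataset for the aggregation, which is consistent with the UI observation that shuffle write (500+ GB) is roughly equal to the input size.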
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org