Posted to issues@spark.apache.org by "Pranav Rao (JIRA)" <ji...@apache.org> on 2018/02/15 17:49:00 UTC

[jira] [Updated] (SPARK-23442) Reading from partitioned and bucketed table uses only bucketSpec.numBuckets partitions in all cases

     [ https://issues.apache.org/jira/browse/SPARK-23442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pranav Rao updated SPARK-23442:
-------------------------------
    Environment:     (was:
{{spark.sql("SET spark.default.parallelism=1000")}}
{{spark.sql("SET spark.sql.shuffle.partitions=500")}}
{{spark.sql("SET spark.sql.files.maxPartitionBytes=134217728")}}
-----
{{$ hdfs getconf -confKey mapreduce.input.fileinputformat.split.minsize}}
0
{{$ hdfs getconf -confKey dfs.blocksize}}
134217728
{{$ hdfs getconf -confKey mapreduce.job.maps}}
32)

> Reading from partitioned and bucketed table uses only bucketSpec.numBuckets partitions in all cases
> ---------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-23442
>                 URL: https://issues.apache.org/jira/browse/SPARK-23442
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, SQL
>    Affects Versions: 2.2.1
>            Reporter: Pranav Rao
>            Priority: Major
>
> Through the DataFrameWriter[T] interface I have created an external Hive table with 5000 (horizontal) partitions and 50 buckets in each partition. Overall the dataset is 600GB and the provider is Parquet.
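> For illustration, a minimal sketch of how such a table can be written; the table, path, and column names here are hypothetical:
> {code}
> df.write
>   .partitionBy("dt")                              // ~5000 horizontal partitions
>   .bucketBy(50, "user_id")                        // 50 buckets within each partition
>   .sortBy("user_id")
>   .format("parquet")
>   .option("path", "hdfs:///warehouse/tablename")  // external table location
>   .saveAsTable("tablename")
> {code}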
> Now this works great when joining with a similarly bucketed dataset: Spark is able to avoid a shuffle.
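> For example, joining against another table bucketed the same way (hypothetical name below), the plan shows no Exchange on the bucketed sides:
> {code}
> val a = spark.table("tablename")
> val b = spark.table("other_table")   // also bucketed into 50 buckets on user_id
> a.join(b, "user_id").explain()       // no Exchange (shuffle) for either input
> {code}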
> But any action on this DataFrame (from _spark.table("tablename")_) runs with only 50 RDD partitions. This is happening because of [createBucketedReadRDD|https://github.com/apache/spark/blob/branch-2.3/sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala], which creates exactly one read partition per bucket regardless of data size. So the 600GB dataset is only read through 50 tasks, which makes this partitioning + bucketing scheme not useful at all.
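> This is easy to observe (sketch, assuming the table above):
> {code}
> val df = spark.table("tablename")
> println(df.rdd.getNumPartitions)   // prints 50, i.e. bucketSpec.numBuckets
> df.count()                         // scans the ~600GB table with only 50 tasks
> {code}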
> I also cannot read the dataset by pointing Spark at the base directory of the Parquet data, because the partition locations don't follow a (basePath + partSpec) format.
> In the meantime, are there any workarounds to read such a table with higher parallelism? Let me know if we
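> One workaround I can think of, untested at this scale and assuming the bucketing guarantees can be given up for the read in question: disable bucketed reads so the scan falls back to size-based file splits, or repartition after reading at the cost of a shuffle:
> {code}
> // Fall back to regular file splits (loses bucketing guarantees for joins)
> spark.conf.set("spark.sql.sources.bucketing.enabled", "false")
> val df1 = spark.table("tablename")
>
> // Or keep the bucketed read and add an explicit shuffle for parallelism
> val df2 = spark.table("tablename").repartition(1000)
> {code}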



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org