Posted to user@spark.apache.org by Deenar Toraskar <de...@gmail.com> on 2016/01/13 16:09:46 UTC

distributeBy using advantage of HDFS or RDD partitioning

Hi

I have data in HDFS partitioned by a logical key and would like to preserve
the partitioning when creating a dataframe for the same. Is it possible to
create a dataframe that preserves partitioning from HDFS or the underlying
RDD?

Regards
Deenar

Re: distributeBy using advantage of HDFS or RDD partitioning

Posted by Simon Elliston Ball <si...@simonellistonball.com>.
If you load data using ORC or Parquet, the RDD will have a partition per file, so your DataFrame will not directly match the key-based partitioning of the table.

If you want to process the data partition by partition and guarantee that partitioning is preserved, then mapPartitions etc. will be useful.

Note that if you perform any DataFrame operations which shuffle, you will end up implicitly re-partitioning to spark.sql.shuffle.partitions (default 200).

Simon

> On 13 Jan 2016, at 10:09, Deenar Toraskar <de...@gmail.com> wrote:
> 
> Hi
> 
> I have data in HDFS partitioned by a logical key and would like to preserve the partitioning when creating a dataframe for the same. Is it possible to create a dataframe that preserves partitioning from HDFS or the underlying RDD?
> 
> Regards
> Deenar


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org