Posted to user@spark.apache.org by ahaider3 <ah...@hawk.iit.edu> on 2015/10/19 20:13:45 UTC

Storing Compressed data in HDFS into Spark

Hi,
A lot of the data I have in HDFS is compressed. I noticed that when I load
this data into Spark and cache it, Spark unrolls the data as usual but keeps
it uncompressed in memory. For example, suppose data is an RDD with
compressed partitions on HDFS, and I cache it. When I call data.count(), the
data is rightly decompressed, since the records must be read to compute the
count. But the copy that gets cached is also decompressed. Can a partition
be kept compressed in Spark? I know Spark can compress data after
serialization, but what if I want only the partitions compressed?
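
For concreteness, a minimal sketch of what I am doing (the path and app
name are made up):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    val sc = new SparkContext(new SparkConf().setAppName("cache-demo"))

    // Hadoop's input codec decompresses the .gz data as it is read.
    val data = sc.textFile("hdfs:///data/events.txt.gz")

    // cache() is persist(StorageLevel.MEMORY_ONLY): partitions are kept
    // as deserialized Java objects, so they sit in memory uncompressed.
    data.persist(StorageLevel.MEMORY_ONLY)

    data.count()  // materializes (and caches) the decompressed records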





Re: Storing Compressed data in HDFS into Spark

Posted by Adnan Haider <ah...@hawk.iit.edu>.
I believe spark.rdd.compress requires the data to be serialized. In my
case, the data is already compressed on disk but gets decompressed when I
try to cache it. I believe that even with spark.rdd.compress set to true,
Spark will still decompress the data on read, then serialize it, and then
compress the serialized bytes.
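
A rough sketch of what I mean (path and app name are hypothetical):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    val conf = new SparkConf()
      .setAppName("serialized-compressed-cache")
      .set("spark.rdd.compress", "true")  // compress serialized cached blocks

    val sc = new SparkContext(conf)
    val data = sc.textFile("hdfs:///data/events.txt.gz")

    // spark.rdd.compress only applies to serialized storage levels, so
    // the pipeline is: decompress on read -> serialize the records ->
    // recompress the serialized bytes.
    data.persist(StorageLevel.MEMORY_ONLY_SER)
    data.count()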

Although Parquet is an option, I believe it only makes sense when running
Spark SQL. If I am using GraphX or MLlib, will it help?

Thanks, Adnan Haider
B.S. Candidate, Computer Science
Illinois Institute of Technology

On Thu, Oct 22, 2015 at 7:15 AM, Igor Berman <ig...@gmail.com> wrote:

> check spark.rdd.compress

Re: Storing Compressed data in HDFS into Spark

Posted by Igor Berman <ig...@gmail.com>.
check spark.rdd.compress
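
For example, after starting spark-shell with --conf spark.rdd.compress=true,
a quick sketch to confirm it took effect (sc is the shell-provided
SparkContext):

    // The flag defaults to false and only affects serialized storage
    // levels such as MEMORY_ONLY_SER.
    sc.getConf.getBoolean("spark.rdd.compress", false)  // should be true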


Re: Storing Compressed data in HDFS into Spark

Posted by Akhil Das <ak...@sigmoidanalytics.com>.
Convert your data to Parquet; it saves both space and time.
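
A minimal sketch of the conversion with the DataFrame API of that era
(the source path and JSON format are hypothetical; sc is an existing
SparkContext):

    import org.apache.spark.sql.SQLContext

    val sqlContext = new SQLContext(sc)

    // One-time conversion: write a columnar, compressed Parquet copy.
    val df = sqlContext.read.json("hdfs:///data/events.json")
    df.write.parquet("hdfs:///data/events.parquet")

    // Later jobs read the Parquet copy; GraphX/MLlib code can still
    // obtain an RDD[Row] from it via .rdd if needed.
    val parquetDF = sqlContext.read.parquet("hdfs:///data/events.parquet")
    parquetDF.count()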

Thanks
Best Regards
