Posted to user@spark.apache.org by Bahubali Jain <ba...@gmail.com> on 2017/11/10 03:54:36 UTC

Compression during shuffle writes

Hi,
I have compressed data of size 500 GB. I am repartitioning this data since
the underlying data is very skewed and is causing a lot of issues for the
downstream jobs.
During repartitioning, the *shuffle writes* are not getting compressed, and
because of this I am running into disk space issues. Below is a screenshot
which clearly depicts the issue (see the Input and Shuffle Write columns).
I have proactively set the parameters below to true, but Spark still doesn't
compress the intermediate shuffle data:

spark.shuffle.compress
spark.shuffle.spill.compress
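
For reference, this is roughly how the job sets those properties and does
the repartition. This is a minimal sketch: the app name, input/output paths,
the partition count of 2000, and the snappy codec are illustrative
placeholders, not the actual job's values.

import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch of the setup. Both compression flags default to true in
// Spark 1.5; they are set explicitly here for clarity.
val conf = new SparkConf()
  .setAppName("RepartitionSkewedData")          // placeholder app name
  .set("spark.shuffle.compress", "true")        // compress map output files
  .set("spark.shuffle.spill.compress", "true")  // compress data spilled to disk during shuffles
  .set("spark.io.compression.codec", "snappy")  // codec used for shuffle/spill compression

val sc = new SparkContext(conf)

// Repartition the skewed input; 2000 is an illustrative partition count.
val input = sc.textFile("hdfs:///path/to/compressed/input")
val repartitioned = input.repartition(2000)
repartitioned.saveAsTextFile("hdfs:///path/to/output")

If disk space is the main constraint, the shuffle codec can also be swapped
the same way (spark.io.compression.codec accepts snappy, lz4, or lzf in
Spark 1.5), though I have not tried that yet.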

[Screenshot: Spark UI stage table showing the Input and Shuffle Write columns]

I am using Spark 1.5 (for various unavoidable reasons!!)
Any suggestions would be greatly appreciated.

Thanks,
Baahu