Posted to user@spark.apache.org by 张万新 <ke...@gmail.com> on 2018/01/30 12:26:31 UTC

spark.sql.adaptive.enabled has no effect

Hi there,

  As far as I know, when *spark.sql.adaptive.enabled* is set to true, the
number of post-shuffle partitions should change with the map output size.
But in my application there is a stage reading 900 GB of shuffled files
with only 200 partitions (the default value of
*spark.sql.shuffle.partitions*), and I verified that the number of
post-shuffle partitions is always the same as the value of
*spark.sql.shuffle.partitions*. Additionally, I left
*spark.sql.adaptive.shuffle.targetPostShuffleInputSize* at its default
value. Have I made a mistake, and what is the correct behavior?
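
For context, this is roughly how I create the session (a minimal sketch;
the app name is illustrative, and the target-size line only spells out the
default value rather than overriding it):

  import org.apache.spark.sql.SparkSession

  // Enable adaptive execution so the number of post-shuffle partitions
  // is derived from the map output size instead of a fixed count.
  val spark = SparkSession.builder()
    .appName("adaptive-shuffle-test")
    .config("spark.sql.adaptive.enabled", "true")
    // Target bytes per post-shuffle partition; 67108864 (64 MB) is the
    // documented default, shown here only to make the knob explicit.
    .config("spark.sql.adaptive.shuffle.targetPostShuffleInputSize", "67108864")
    .getOrCreate()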

Thanks