Posted to issues@spark.apache.org by "Anusha Buchireddygari (JIRA)" <ji...@apache.org> on 2019/02/05 18:05:00 UTC

[jira] [Created] (SPARK-26828) Coalesce to reduce partitions before writing to hive is not working

Anusha Buchireddygari created SPARK-26828:
---------------------------------------------

             Summary: Coalesce to reduce partitions before writing to hive is not working
                 Key: SPARK-26828
                 URL: https://issues.apache.org/jira/browse/SPARK-26828
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 2.3.0
            Reporter: Anusha Buchireddygari


final_store.coalesce(5).write.mode("overwrite").insertInto("database.tablename", overwrite=True) does not reduce the number of partitions before writing to Hive. I've set

.config("spark.default.parallelism", "2000") \
.config("spark.sql.shuffle.partitions", "2000") \

Using repartition instead does reduce the partitions, but the insert then takes 20-25 minutes.
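
For reference, a minimal sketch of the two code paths described above, assuming a SparkSession with Hive support named `spark` and a DataFrame `final_store`; the table and source names are placeholders taken from or invented for this report, not part of the actual job.

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("coalesce-before-insert")  # hypothetical app name
    .config("spark.default.parallelism", "2000")
    .config("spark.sql.shuffle.partitions", "2000")
    .enableHiveSupport()
    .getOrCreate()
)

final_store = spark.table("database.source_table")  # placeholder source

# Path 1 (reported as not reducing partitions): coalesce narrows the
# partition count without a shuffle before the Hive insert.
final_store.coalesce(5) \
    .write.mode("overwrite") \
    .insertInto("database.tablename", overwrite=True)

# Path 2 (reported as working, but the insert takes 20-25 minutes):
# repartition forces a full shuffle down to 5 partitions.
final_store.repartition(5) \
    .write.mode("overwrite") \
    .insertInto("database.tablename", overwrite=True)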



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org