Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/02/07 07:30:00 UTC

[jira] [Commented] (SPARK-26828) Coalesce to reduce partitions before writing to Hive is not working

    [ https://issues.apache.org/jira/browse/SPARK-26828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16762436#comment-16762436 ] 

Hyukjin Kwon commented on SPARK-26828:
--------------------------------------

Can you provide a self-contained reproducer so that people don't duplicate the investigation effort? Also, I wonder whether {{spark.default.parallelism}} and {{spark.sql.shuffle.partitions}} matter here.
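
A minimal, self-contained reproducer along these lines would help (the table name, schema, and data sizes below are placeholders I am assuming, not details from the report):

{code:python}
from pyspark.sql import SparkSession

# Hypothetical reproducer sketch; configs match the values from the report.
spark = (SparkSession.builder
         .appName("SPARK-26828-repro")
         .config("spark.default.parallelism", "2000")
         .config("spark.sql.shuffle.partitions", "2000")
         .enableHiveSupport()
         .getOrCreate())

# Build a DataFrame with many partitions, as it would look after a wide shuffle.
df = spark.range(0, 10000000).repartition(2000)

# Create the target table once so insertInto has something to write into.
spark.sql("CREATE TABLE IF NOT EXISTS default.spark26828_test (id BIGINT) "
          "STORED AS PARQUET")

# The pattern from the report: coalesce down to 5 partitions before inserting.
df.coalesce(5).write.mode("overwrite").insertInto("default.spark26828_test",
                                                  overwrite=True)

# Read back and check the partition count, which roughly tracks the number
# of files written out.
print(spark.table("default.spark26828_test").rdd.getNumPartitions())
{code}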

> Coalesce to reduce partitions before writing to Hive is not working
> -------------------------------------------------------------------
>
>                 Key: SPARK-26828
>                 URL: https://issues.apache.org/jira/browse/SPARK-26828
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.3.0
>            Reporter: Anusha Buchireddygari
>            Priority: Minor
>
> final_store.coalesce(5).write.mode("overwrite").insertInto("database.tablename", overwrite=True) does not merge the partitions before the write. I've set
> .config("spark.default.parallelism", "2000") \
> .config("spark.sql.shuffle.partitions", "2000") \
> however, repartition does work, but the insert then takes 20-25 minutes.
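
For context on the coalesce/repartition difference described above: {{coalesce}} is a narrow transformation, so the reduced partition count can propagate upstream and shrink the parallelism of the preceding stages, while {{repartition}} forces a full shuffle that preserves upstream parallelism at the cost of moving the data, which would be consistent with the 20-25 minute insert times reported. A sketch of the two variants, reusing the {{spark}} session from the sketch above ({{final_store}} and the table name stand in for the reporter's objects):

{code:python}
# Stand-in for the reporter's DataFrame.
final_store = spark.range(0, 10000000).repartition(2000)

# Variant 1 (as reported): coalesce avoids a shuffle, but as a narrow
# dependency it can pull the whole upstream stage down to 5 tasks.
final_store.coalesce(5).write.mode("overwrite") \
    .insertInto("database.tablename", overwrite=True)

# Variant 2 (the workaround that works, per the report): repartition forces
# a full shuffle to exactly 5 partitions, keeping upstream parallelism but
# paying the shuffle cost.
final_store.repartition(5).write.mode("overwrite") \
    .insertInto("database.tablename", overwrite=True)
{code}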



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org