Posted to issues@spark.apache.org by "Xiao Li (JIRA)" <ji...@apache.org> on 2017/07/10 23:41:00 UTC
[jira] [Updated] (SPARK-20920) ForkJoinPool pools are leaked when writing hive tables with many partitions
[ https://issues.apache.org/jira/browse/SPARK-20920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Xiao Li updated SPARK-20920:
----------------------------
Fix Version/s: (was: 2.3.0)
> ForkJoinPool pools are leaked when writing hive tables with many partitions
> ---------------------------------------------------------------------------
>
> Key: SPARK-20920
> URL: https://issues.apache.org/jira/browse/SPARK-20920
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.1.1
> Reporter: Rares Mirica
> Assignee: Sean Owen
> Fix For: 2.1.2, 2.2.0
>
>
> This bug is loosely related to SPARK-17396
> In this case it happens when writing to a Hive table with many, many partitions (my table is partitioned by hour and stores data it gets from Kafka in a Spark Streaming application):
> df.repartition()
>   .write
>   .format("orc")
>   .option("path", s"$tablesStoragePath/$tableName")
>   .mode(SaveMode.Append)
>   .partitionBy("dt", "hh")
>   .saveAsTable(tableName)
> As this table grows beyond a certain size, ForkJoinPool pools start leaking. Upon examination with a debugger, I found that the caller is AlterTableRecoverPartitionsCommand and that the problem happens when `evalTaskSupport` is used (line 555). I tried setting a very large threshold via `spark.rdd.parallelListingThreshold` and the problem went away; see the sketch below.
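> For reference, a minimal sketch of that workaround, assuming the SparkSession is built at application startup (the app name is a placeholder and the threshold value is arbitrary; it just needs to exceed the number of partitions being listed):
>
> import org.apache.spark.sql.SparkSession
>
> // Raise the parallel-listing threshold so partition recovery takes the
> // serial path and never constructs a per-command ForkJoinPool.
> val spark = SparkSession.builder()
>   .appName("streaming-writer")  // placeholder name
>   .config("spark.rdd.parallelListingThreshold", Int.MaxValue.toString)
>   .getOrCreate()
>
> The same setting can also be passed on the command line via spark-submit with --conf spark.rdd.parallelListingThreshold=2147483647.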
> My assumption is that the problem happens in this case, and not in the one in SPARK-17396, because AlterTableRecoverPartitionsCommand is a case class, so a new instance (and with it a new pool) is created for every command, while UnionRDD keeps its pool in an object, where multiple instances are not possible and therefore nothing leaks.
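> A minimal sketch of the distinction, with hypothetical names (this is not Spark's actual code):
>
> import java.util.concurrent.ForkJoinPool
>
> // A pool created per instance: every new LeakyCommand(...) allocates a
> // fresh ForkJoinPool, which leaks unless shutdown() is explicitly called.
> case class LeakyCommand(table: String) {
>   private val pool = new ForkJoinPool(8)
> }
>
> // A pool held in a singleton object: created at most once per JVM and
> // shared by every caller, so nothing accumulates.
> object SharedPool {
>   val pool = new ForkJoinPool(8)
> }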
> Regards,
> Rares
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org