Posted to issues@spark.apache.org by "Yin Huai (JIRA)" <ji...@apache.org> on 2015/05/28 01:44:17 UTC
[jira] [Updated] (SPARK-4629) Spark SQL uses Hadoop Configuration in a thread-unsafe manner when writing Parquet files
[ https://issues.apache.org/jira/browse/SPARK-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Yin Huai updated SPARK-4629:
----------------------------
Target Version/s: (was: 1.4.0)
> Spark SQL uses Hadoop Configuration in a thread-unsafe manner when writing Parquet files
> ----------------------------------------------------------------------------------------
>
> Key: SPARK-4629
> URL: https://issues.apache.org/jira/browse/SPARK-4629
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.1.0
> Reporter: Michael Allman
> Assignee: Cheng Lian
> Priority: Critical
>
> The method {{ParquetRelation.createEmpty}} mutates the Hadoop {{Configuration}} instance it is given in order to set the Parquet writer compression level (cf. https://github.com/apache/spark/blob/v1.1.0/sql/core/src/main/scala/org/apache/spark/sql/parquet/ParquetRelation.scala#L149). Because that {{Configuration}} is shared, this can throw a {{ConcurrentModificationException}} when concurrent jobs on a single {{SparkContext}} save Parquet files, as sketched below.
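> A minimal, self-contained sketch of the failure mode (not the Spark code itself; object and key names are illustrative): one thread mutating a shared Hadoop {{Configuration}} while another reads it, which is effectively what concurrent Parquet-writing jobs do:
> {code:scala}
> import org.apache.hadoop.conf.Configuration
>
> object SharedConfRepro {
>   def main(args: Array[String]): Unit = {
>     val conf = new Configuration()
>     // Writer thread: mutates the shared conf, much as createEmpty does
>     // when it sets the Parquet compression codec.
>     val writer = new Thread(new Runnable {
>       def run(): Unit =
>         for (i <- 0 until 100000) conf.set(s"demo.key.$i", "value")
>     })
>     // Reader thread: Configuration.iterator() copies the underlying
>     // Properties; a concurrent set() during that copy can throw
>     // ConcurrentModificationException.
>     val reader = new Thread(new Runnable {
>       def run(): Unit =
>         for (_ <- 0 until 1000) {
>           val it = conf.iterator()
>           while (it.hasNext) it.next()
>         }
>     })
>     writer.start(); reader.start()
>     writer.join(); reader.join()
>   }
> }
> {code}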
> Our "fix" was to simply remove the line in question and set the compression level in the hadoop configuration before starting our jobs.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)