Posted to issues@spark.apache.org by "Michael Allman (JIRA)" <ji...@apache.org> on 2014/11/26 22:21:12 UTC

[jira] [Created] (SPARK-4629) Spark SQL uses Hadoop Configuration in a thread-unsafe manner when writing Parquet files

Michael Allman created SPARK-4629:
-------------------------------------

             Summary: Spark SQL uses Hadoop Configuration in a thread-unsafe manner when writing Parquet files
                 Key: SPARK-4629
                 URL: https://issues.apache.org/jira/browse/SPARK-4629
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 1.1.0
            Reporter: Michael Allman


The method {{ParquetRelation.createEmpty}} mutates the Hadoop {{Configuration}} instance it is given in order to set the Parquet writer compression codec (cf. https://github.com/apache/spark/blob/v1.1.0/sql/core/src/main/scala/org/apache/spark/sql/parquet/ParquetRelation.scala#L149). Because concurrent jobs sharing a single {{SparkContext}} also share its Hadoop {{Configuration}}, this mutation can trigger a {{ConcurrentModificationException}} when those jobs save Parquet files.
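For illustration, a minimal sketch of the race, not the actual Spark code: one thread mutates a shared {{Configuration}} while another iterates over it, as can happen when a concurrently launched job serializes the same {{Configuration}} during setup. All object and key names below are invented for the demo.

{code:scala}
import org.apache.hadoop.conf.Configuration

// Hypothetical reproduction of the race (names invented). One thread
// mutates a shared Configuration while another iterates over it, which
// is roughly what happens when a concurrent job reads the Configuration
// that ParquetRelation.createEmpty is mutating.
object SharedConfRace {
  def main(args: Array[String]): Unit = {
    val sharedConf = new Configuration()

    val mutator = new Thread(new Runnable {
      def run(): Unit = {
        // A varying key keeps each set() a structural modification of
        // the underlying table, which is what trips the iterator.
        for (i <- 1 to 100000) sharedConf.set(s"demo.key.$i", "GZIP")
      }
    })

    val scanner = new Thread(new Runnable {
      def run(): Unit = {
        // Iterating a Configuration that is concurrently modified can
        // throw ConcurrentModificationException.
        for (_ <- 1 to 100000) {
          val it = sharedConf.iterator()
          while (it.hasNext) it.next()
        }
      }
    })

    mutator.start(); scanner.start()
    mutator.join(); scanner.join()
  }
}
{code}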

Our "fix" was to simply remove the line in question and set the compression level in the hadoop configuration before starting our jobs.



