Posted to issues@spark.apache.org by "swetha k (JIRA)" <ji...@apache.org> on 2015/12/01 22:36:10 UTC

[jira] [Commented] (SPARK-5968) Parquet warning in spark-shell

    [ https://issues.apache.org/jira/browse/SPARK-5968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15034624#comment-15034624 ] 

swetha k commented on SPARK-5968:
---------------------------------

[~lian cheng]

Following are the dependencies and versions that I am using. I would like to know whether using different versions would help fix this. I see this error in my Spark batch job when I save Parquet files to HDFS.

        <sparkVersion>1.5.2</sparkVersion>
        <avro.version>1.7.7</avro.version>
        <parquet.version>1.4.3</parquet.version>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>${sparkVersion}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.avro</groupId>
            <artifactId>avro</artifactId>
            <version>${avro.version}</version>
        </dependency>

        <dependency>
            <groupId>com.twitter</groupId>
            <artifactId>parquet-avro</artifactId>
            <version>1.6.0rc7</version>
        </dependency>

        <dependency>
            <groupId>com.twitter</groupId>
            <artifactId>parquet-hadoop</artifactId>
            <version>1.6.0rc7</version>
        </dependency>
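
For context, this is roughly how the batch job writes (a minimal sketch with hypothetical paths and column names, not the actual job). Note also that the parquet-avro and parquet-hadoop dependencies above pin 1.6.0rc7 directly, so the <parquet.version> property (1.4.3) is never actually referenced. Appending a compatible-but-different schema to the same directory is exactly the pattern the issue below describes:

{code}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Hypothetical app name, paths, and columns, for illustration only.
val sc = new SparkContext(new SparkConf().setAppName("ParquetBatchWrite"))
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

val v1 = sc.parallelize(Seq((1, "a"))).toDF("id", "value")
v1.write.parquet("hdfs:///user/example/rankings")

// Appending rows whose schema is compatible but not identical makes the
// per-file Spark SQL schemas stored in the Parquet key-value metadata
// differ, which is what trips ParquetFileWriter.mergeFooters and produces
// the WARN from ParquetOutputCommitter.
val v2 = sc.parallelize(Seq((2, "b", 0.5))).toDF("id", "value", "score")
v2.write.mode("append").parquet("hdfs:///user/example/rankings")
{code}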
> Parquet warning in spark-shell
> ------------------------------
>
>                 Key: SPARK-5968
>                 URL: https://issues.apache.org/jira/browse/SPARK-5968
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.3.0
>            Reporter: Michael Armbrust
>            Assignee: Cheng Lian
>            Priority: Critical
>             Fix For: 1.3.0
>
>
> This may happen in the case of schema evolving, namely appending new Parquet data with different but compatible schema to existing Parquet files:
> {code}
> 15/02/23 23:29:24 WARN ParquetOutputCommitter: could not write summary file for rankings
> parquet.io.ParquetEncodingException: file:/Users/matei/workspace/apache-spark/rankings/part-r-00001.parquet invalid: all the files must be contained in the root rankings
> at parquet.hadoop.ParquetFileWriter.mergeFooters(ParquetFileWriter.java:422)
> at parquet.hadoop.ParquetFileWriter.writeMetadataFile(ParquetFileWriter.java:398)
> at parquet.hadoop.ParquetOutputCommitter.commitJob(ParquetOutputCommitter.java:51)
> {code}
> The reason is that the Spark SQL schemas stored in the Parquet key-value metadata differ across files. Parquet doesn't know how to "merge" this opaque user-defined metadata, so it just throws an exception and gives up writing the summary file. Since the Parquet data source in Spark 1.3.0 supports schema merging, this is harmless, but it looks scary to the user. We should suppress it through the logger.
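
As a user-side workaround until the logging fix lands, one option (my assumption based on parquet-hadoop's ParquetOutputFormat.ENABLE_JOB_SUMMARY key, not something stated in this issue) is to disable summary files entirely, since merging the footers into _metadata is the step that fails:

{code}
// Disable Parquet summary files (_metadata) so ParquetOutputCommitter
// never attempts the footer merge that emits this warning.
sc.hadoopConfiguration.set("parquet.enable.summary-metadata", "false")
{code}

Alternatively, the warning alone can be silenced by raising the log level for the parquet.hadoop loggers in log4j.properties.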


