Posted to dev@parquet.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2021/04/19 12:18:00 UTC

[jira] [Commented] (PARQUET-2027) Merging parquet files created in 1.11.1 not possible using 1.12.0

    [ https://issues.apache.org/jira/browse/PARQUET-2027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324984#comment-17324984 ] 

ASF GitHub Bot commented on PARQUET-2027:
-----------------------------------------

gszadovszky opened a new pull request #896:
URL: https://github.com/apache/parquet-mr/pull/896


   Make sure you have checked _all_ steps below.
   
   ### Jira
   
   - [ ] My PR addresses the following [Parquet Jira](https://issues.apache.org/jira/browse/PARQUET/) issues and references them in the PR title. For example, "PARQUET-1234: My Parquet PR"
     - https://issues.apache.org/jira/browse/PARQUET-XXX
     - In case you are adding a dependency, check if the license complies with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).
   
   ### Tests
   
   - [ ] My PR adds the following unit tests __OR__ does not need testing for this extremely good reason:
   
   ### Commits
   
   - [ ] My commits all reference Jira issues in their subject lines. In addition, my commits follow the guidelines from "[How to write a good git commit message](http://chris.beams.io/posts/git-commit/)" (see the example after this list):
     1. Subject is separated from body by a blank line
     2. Subject is limited to 50 characters (not including Jira issue reference)
     3. Subject does not end with a period
     4. Subject uses the imperative mood ("add", not "adding")
     5. Body wraps at 72 characters
     6. Body explains "what" and "why", not "how"
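     A commit message following these guidelines might look like this (the issue number and the wording are purely illustrative):
     ```
     PARQUET-1234: Add column index truncation option

     Large binary min/max values can make the footer grow quickly. Truncate
     them to a configurable length so readers can still prune pages while
     the metadata stays small.
     ```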
   
   ### Documentation
   
   - [ ] In case of new functionality, my PR adds documentation that describes how to use it.
     - All the public functions and classes in the PR contain Javadoc that explains what they do
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


> Merging parquet files created in 1.11.1 not possible using 1.12.0 
> ------------------------------------------------------------------
>
>                 Key: PARQUET-2027
>                 URL: https://issues.apache.org/jira/browse/PARQUET-2027
>             Project: Parquet
>          Issue Type: Bug
>          Components: parquet-mr
>    Affects Versions: 1.12.0
>            Reporter: Matthew M
>            Assignee: Gabor Szadovszky
>            Priority: Major
>
> I have parquet files created using 1.11.1. In the process I join two files (with the same schema) into one output file. I create a Hadoop writer:
> {code:scala}
> val hadoopWriter = new ParquetFileWriter(
>       HadoopOutputFile.fromPath(
>         new Path(outputPath.toString),
>         new Configuration()
>       ), outputSchema, Mode.OVERWRITE,
>       8 * 1024 * 1024,
>       2097152,
>       DEFAULT_COLUMN_INDEX_TRUNCATE_LENGTH,
>       DEFAULT_STATISTICS_TRUNCATE_LENGTH,
>       DEFAULT_PAGE_WRITE_CHECKSUM_ENABLED
>     )
>     hadoopWriter.start()
> {code}
> and then try to append one file to the other:
> {code:scala}
> hadoopWriter.appendFile(HadoopInputFile.fromPath(new Path(file), new Configuration()))
> {code}
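> Putting the two snippets together, a self-contained sketch of the whole merge flow looks roughly like this (the example schema, the /tmp paths and the empty extra-metadata map passed to end() are placeholders, and the DEFAULT_* constants are assumed to come from org.apache.parquet.column.ParquetProperties):
> {code:scala}
> import java.util.Collections
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.Path
> import org.apache.parquet.column.ParquetProperties._
> import org.apache.parquet.hadoop.ParquetFileWriter
> import org.apache.parquet.hadoop.ParquetFileWriter.Mode
> import org.apache.parquet.hadoop.util.{HadoopInputFile, HadoopOutputFile}
> import org.apache.parquet.schema.MessageTypeParser
>
> val conf = new Configuration()
>
> // Schema shared by all the input files (placeholder for this sketch).
> val outputSchema = MessageTypeParser.parseMessageType(
>   "message example { required int64 id; optional binary name (UTF8); }")
>
> // Same writer setup as above: 8 MiB row groups, 2 MiB max padding,
> // library defaults for truncation lengths and page checksums.
> val hadoopWriter = new ParquetFileWriter(
>   HadoopOutputFile.fromPath(new Path("/tmp/merged.parquet"), conf),
>   outputSchema, Mode.OVERWRITE,
>   8 * 1024 * 1024, 2097152,
>   DEFAULT_COLUMN_INDEX_TRUNCATE_LENGTH,
>   DEFAULT_STATISTICS_TRUNCATE_LENGTH,
>   DEFAULT_PAGE_WRITE_CHECKSUM_ENABLED)
>
> hadoopWriter.start()
> // Copy each input's row groups as-is, without re-encoding the data.
> Seq("/tmp/part-0.parquet", "/tmp/part-1.parquet").foreach { file =>
>   hadoopWriter.appendFile(HadoopInputFile.fromPath(new Path(file), conf))
> }
> // Write the footer; no extra key/value metadata in this example.
> hadoopWriter.end(Collections.emptyMap[String, String]())
> {code}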
> Everything works on 1.11.1, but when I switch to 1.12.0 it fails with this error:
> {code}
> STDERR: Exception in thread "main" java.io.IOException: can not read class org.apache.parquet.format.PageHeader: Required field 'uncompressed_page_size' was not found in serialized data! Struct: org.apache.parquet.format.PageHeader$PageHeaderStandardScheme@b91d8c4
>  at org.apache.parquet.format.Util.read(Util.java:365)
>  at org.apache.parquet.format.Util.readPageHeader(Util.java:132)
>  at org.apache.parquet.format.Util.readPageHeader(Util.java:127)
>  at org.apache.parquet.hadoop.Offsets.readDictionaryPageSize(Offsets.java:75)
>  at org.apache.parquet.hadoop.Offsets.getOffsets(Offsets.java:58)
>  at org.apache.parquet.hadoop.ParquetFileWriter.appendRowGroup(ParquetFileWriter.java:998)
>  at org.apache.parquet.hadoop.ParquetFileWriter.appendRowGroups(ParquetFileWriter.java:918)
>  at org.apache.parquet.hadoop.ParquetFileReader.appendTo(ParquetFileReader.java:888)
>  at org.apache.parquet.hadoop.ParquetFileWriter.appendFile(ParquetFileWriter.java:895)
>  at [...]
> Caused by: shaded.parquet.org.apache.thrift.protocol.TProtocolException: Required field 'uncompressed_page_size' was not found in serialized data! Struct: org.apache.parquet.format.PageHeader$PageHeaderStandardScheme@b91d8c4
>  at org.apache.parquet.format.PageHeader$PageHeaderStandardScheme.read(PageHeader.java:1108)
>  at org.apache.parquet.format.PageHeader$PageHeaderStandardScheme.read(PageHeader.java:1019)
>  at org.apache.parquet.format.PageHeader.read(PageHeader.java:896)
>  at org.apache.parquet.format.Util.read(Util.java:362)
>  ... 14 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)