Posted to dev@parquet.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2023/02/03 18:35:00 UTC

[jira] [Commented] (PARQUET-831) Corrupt Parquet Files

    [ https://issues.apache.org/jira/browse/PARQUET-831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683990#comment-17683990 ] 

ASF GitHub Bot commented on PARQUET-831:
----------------------------------------

jianchun opened a new pull request, #1022:
URL: https://github.com/apache/parquet-mr/pull/1022

   When the buffered memSize is 0, ColumnWriterV1 hits a floating-point division by zero, which makes valueCountForNextSizeCheck become about 1G. (Floating-point division by zero yields Infinity, and casting Infinity to int yields MAX_INT.) The writer then keeps buffering data and does not emit a page until about 1G values have been written or close() is called.
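   A minimal sketch of the failing arithmetic (illustrative names and expression shape, not the exact parquet-mr source): with memSize == 0 the float division produces Infinity, the int cast clamps it to Integer.MAX_VALUE, and the subsequent halving lands at about 1G.
   
   ```java
   public class SizeCheckDemo {
       public static void main(String[] args) {
           long valueCount = 100;                 // values buffered so far
           long memSize = 0;                      // buffered bytes -- zero triggers the bug
           long pageSizeThreshold = 1024 * 1024;  // target page size
   
           // (float) pageSizeThreshold / memSize  ->  Infinity
           // (int) Infinity                       ->  Integer.MAX_VALUE
           int valueCountForNextSizeCheck =
               (int) (valueCount * ((float) pageSizeThreshold / memSize)) / 2 + 1;
   
           System.out.println(valueCountForNextSizeCheck);  // 1073741824, i.e. ~1G
       }
   }
   ```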
   
   This excessive buffering can easily overflow the underlying CapacityByteArrayOutputStream, which tracks its size as an int and does not check for overflow. When the size overflows an int it becomes negative, and the Parquet write path does not check for a negative size: it writes the value as-is into the V1 page header's uncompressed_page_size.
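   A hypothetical illustration of the unchecked int counter wrapping negative (the variable names are made up, not the actual CapacityByteArrayOutputStream fields):
   
   ```java
   int bufferedSize = Integer.MAX_VALUE - 10;  // int-typed size counter near its limit
   int appended = 100;                         // one more sizable append
   bufferedSize += appended;                   // silently wraps around
   System.out.println(bufferedSize);           // -2147483559, written into uncompressed_page_size
   ```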
   
   This corrupts the Parquet file: the reader allocates a read buffer of uncompressed_page_size bytes and throws
   NegativeArraySizeException when that value is negative.
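   On the read side the failure mode is simply the array allocation (a sketch with a made-up value, not the actual reader code):
   
   ```java
   int uncompressedPageSize = -2147483559;        // negative size read from the page header
   byte[] page = new byte[uncompressedPageSize];  // throws NegativeArraySizeException
   ```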
   
   Fixed by applying an upper bound to valueCountForNextSizeCheck and adding an overflow check.
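   A sketch of the kind of guard the fix applies (the cap value, class, and method names below are illustrative assumptions, not necessarily what the patch uses):
   
   ```java
   class PageSizeGuard {                                      // illustrative, not the actual patch
       static final int MAX_VALUES_PER_SIZE_CHECK = 10_000;  // assumed upper bound
       int valueCountForNextSizeCheck;
       int bufferedSize;
   
       void updateNextSizeCheck(long valueCount, long memSize, long pageSizeThreshold) {
           // Never wait more than a bounded number of values before re-checking,
           // even when memSize is 0 (which previously yielded Infinity -> MAX_INT).
           int estimate = memSize == 0
               ? MAX_VALUES_PER_SIZE_CHECK
               : (int) (valueCount * ((float) pageSizeThreshold / memSize)) / 2 + 1;
           valueCountForNextSizeCheck = Math.min(estimate, MAX_VALUES_PER_SIZE_CHECK);
       }
   
       void account(int bytesAppended) {
           // Overflow check: Math.addExact throws ArithmeticException instead of
           // letting the int size counter wrap negative.
           bufferedSize = Math.addExact(bufferedSize, bytesAppended);
       }
   }
   ```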
   
   This bug exists in 1.10.x and earlier.
   
   ColumnWriteStoreV2 has a small misplaced-parenthesis bug that unintentionally casts to float and back to long before the division, so the division is an integer division instead of the intended floating-point division. It does not cause problems in practice because other min/max constraints are applied, but it is fixed as well. (This issue also exists in 1.11.x.)
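   An illustrative before/after of the misplaced parenthesis (variable names and values are made up for the example):
   
   ```java
   long rows = 1000, usedMem = 3000, remainingMem = 60_000;
   
   // Buggy: the cast closes too early, so rows is cast to float and straight
   // back to long, and the division is integer division.
   long buggy = (long) ((float) rows) / usedMem * remainingMem;   // 1000 / 3000 = 0 -> 0
   
   // Intended: do the whole estimate in floating point, cast once at the end.
   long fixed = (long) ((float) rows / usedMem * remainingMem);   // ~20000
   ```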
   
   Make sure you have checked _all_ steps below.
   
   ### Jira
   
   - [x] My PR addresses the following [Parquet Jira](https://issues.apache.org/jira/browse/PARQUET/) issues and references them in the PR title. For example, "PARQUET-1234: My Parquet PR"
     - https://issues.apache.org/jira/browse/PARQUET-XXX
     - In case you are adding a dependency, check if the license complies with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).
   
   ### Tests
   
   - [x] My PR adds the following unit tests __OR__ does not need testing for this extremely good reason:
   
   ### Commits
   
   - [x] My commits all reference Jira issues in their subject lines. In addition, my commits follow the guidelines from "[How to write a good git commit message](http://chris.beams.io/posts/git-commit/)":
     1. Subject is separated from body by a blank line
     1. Subject is limited to 50 characters (not including Jira issue reference)
     1. Subject does not end with a period
     1. Subject uses the imperative mood ("add", not "adding")
     1. Body wraps at 72 characters
     1. Body explains "what" and "why", not "how"
   
   ### Documentation
   
   - [x] In case of new functionality, my PR adds documentation that describes how to use it.
     - All the public functions and the classes in the PR contain Javadoc that explain what it does
   




> Corrupt Parquet Files
> ---------------------
>
>                 Key: PARQUET-831
>                 URL: https://issues.apache.org/jira/browse/PARQUET-831
>             Project: Parquet
>          Issue Type: Bug
>          Components: parquet-mr
>    Affects Versions: 1.7.0
>         Environment: HDP-2.5.3.0 Spark-2.0.2
>            Reporter: Steve Severance
>            Priority: Major
>
> I am getting corrupt Parquet files as the result of a Spark job. The write job completes with no errors, but when I read the data again I get the following error:
> org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file hdfs://MYPATH/part-r-00004-b5c93a19-2f75-4c04-b798-de9cb463f02f.gz.parquet
> at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:228)
> at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:201)
> at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:91)
> at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:128)
> at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:91)
> at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithoutKey$(Unknown Source)
> at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
> at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
> at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
> at org.apache.spark.scheduler.Task.run(Task.scala:86)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NegativeArraySizeException
> at org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:755)
> at org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:494)
> at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
> at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:208) 
> The job that generates this data partitions and sorts the data in a particular way to achieve better compression. If I don't partition and sort, I have not been able to reproduce this behavior, and it only occurs on roughly 25% of the data. Most of the time simply rerunning the write job makes the read error go away, but I have now run across cases where it does not. I am happy to provide what data I can, or to work with someone to run this down.
> I know this is a sub-optimal report, but I have not been able to generate random data that reproduces this issue. The data that trips this bug is typically in 5GB+ post-compression files.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)