Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2022/06/24 13:35:16 UTC

[GitHub] [hudi] JerryYue-M commented on issue #5867: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: COLUMN

JerryYue-M commented on issue #5867:
URL: https://github.com/apache/hudi/issues/5867#issuecomment-1165582325

   @danny0405 @codope 
   With Hudi release-0.11.0, this error appears frequently in the compaction task and can make the compaction fail.
   I found that a `RemoteException: File does not exist` error may appear first, which causes the mergeHandle to close; before closing, it flushes some remaining records, and that is when the following error occurs:

   ```
   2022-06-24 21:22:55,019 ERROR org.apache.hudi.io.HoodieMergeHandle                         [] - Error writing record  HoodieRecord{key=HoodieKey { recordKey=xxx ea125773f partitionPath=2022-06-21/18}, currentLocation='null', newLocation='null'}
   java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: COLUMN
   	at org.apache.hudi.org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:217) ~[blob_p-295f7415f20d1fe87ffb9658937af184c87dc096-45deddc0573ab868da621d786b6f266a:0.11.0]
   	at org.apache.hudi.org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:209) ~[blob_p-295f7415f20d1fe87ffb9658937af184c87dc096-45deddc0573ab868da621d786b6f266a:0.11.0]
   	at org.apache.hudi.org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:407) ~[blob_p-295f7415f20d1fe87ffb9658937af184c87dc096-45deddc0573ab868da621d786b6f266a:0.11.0]
   	at org.apache.hudi.org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:184) ~[blob_p-295f7415f20d1fe87ffb9658937af184c87dc096-45deddc0573ab868da621d786b6f266a:0.11.0]
   	at org.apache.hudi.org.apache.parquet.hadoop.InternalParquetRecordWriter.checkBlockSizeReached(InternalParquetRecordWriter.java:158) ~[blob_p-295f7415f20d1fe87ffb9658937af184c87dc096-45deddc0573ab868da621d786b6f266a:0.11.0]
   	at org.apache.hudi.org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:140) ~[blob_p-295f7415f20d1fe87ffb9658937af184c87dc096-45deddc0573ab868da621d786b6f266a:0.11.0]
   	at org.apache.hudi.org.apache.parquet.hadoop.ParquetWriter.write(ParquetWriter.java:310) ~[blob_p-295f7415f20d1fe87ffb9658937af184c87dc096-45deddc0573ab868da621d786b6f266a:0.11.0]
   	at org.apache.hudi.io.storage.HoodieParquetWriter.writeAvro(HoodieParquetWriter.java:104) ~[blob_p-295f7415f20d1fe87ffb9658937af184c87dc096-45deddc0573ab868da621d786b6f266a:0.11.0]
   	at org.apache.hudi.io.HoodieMergeHandle.writeToFile(HoodieMergeHandle.java:367) ~[blob_p-295f7415f20d1fe87ffb9658937af184c87dc096-45deddc0573ab868da621d786b6f266a:0.11.0]
   	at org.apache.hudi.io.HoodieMergeHandle.writeRecord(HoodieMergeHandle.java:296) ~[blob_p-295f7415f20d1fe87ffb9658937af184c87dc096-45deddc0573ab868da621d786b6f266a:0.11.0]
   	at org.apache.hudi.io.HoodieMergeHandle.writeInsertRecord(HoodieMergeHandle.java:277) ~[blob_p-295f7415f20d1fe87ffb9658937af184c87dc096-45deddc0573ab868da621d786b6f266a:0.11.0]
   	at org.apache.hudi.io.HoodieMergeHandle.writeIncomingRecords(HoodieMergeHandle.java:380) ~[blob_p-295f7415f20d1fe87ffb9658937af184c87dc096-45deddc0573ab868da621d786b6f266a:0.11.0]
   	at org.apache.hudi.io.HoodieMergeHandle.close(HoodieMergeHandle.java:388) ~[blob_p-295f7415f20d1fe87ffb9658937af184c87dc096-45deddc0573ab868da621d786b6f266a:0.11.0]
   	at org.apache.hudi.table.action.commit.FlinkMergeHelper.runMerge(FlinkMergeHelper.java:108) ~[blob_p-295f7415f20d1fe87ffb9658937af184c87dc096-45deddc0573ab868da621d786b6f266a:0.11.0]
   	at org.apache.hudi.table.HoodieFlinkCopyOnWriteTable.handleUpdateInternal(HoodieFlinkCopyOnWriteTable.java:379) ~[blob_p-295f7415f20d1fe87ffb9658937af184c87dc096-45deddc0573ab868da621d786b6f266a:0.11.0]
   	at org.apache.hudi.table.HoodieFlinkCopyOnWriteTable.handleUpdate(HoodieFlinkCopyOnWriteTable.java:370) ~[blob_p-295f7415f20d1fe87ffb9658937af184c87dc096-45deddc0573ab868da621d786b6f266a:0.11.0]
   	at org.apache.hudi.table.action.compact.HoodieCompactor.compact(HoodieCompactor.java:227) ~
   ```
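
   From the trace, `HoodieMergeHandle.close()` still tries to flush the remaining incoming records through `writeIncomingRecords()` even after the underlying Parquet writer has already aborted, so the writer's state machine rejects the new row group (`Current state: COLUMN`) and the secondary error masks the original `RemoteException`. A minimal sketch of the kind of guard I would expect here (all names, `SimpleMergeHandle`, `writeFailed`, `writeToFile`, `closeFile`, are hypothetical and only illustrate the pattern; this is not the actual Hudi code):

   ```java
   // Hypothetical sketch: remember that a write already failed, and skip the
   // close-time flush so a broken Parquet writer is never written to again.
   import java.io.Closeable;
   import java.io.IOException;
   import java.util.ArrayDeque;
   import java.util.Queue;

   class SimpleMergeHandle implements Closeable {
       private final Queue<String> incomingRecords = new ArrayDeque<>();
       private boolean writeFailed = false;

       void write(String record) {
           try {
               writeToFile(record); // may throw, e.g. on a lost HDFS file/lease
           } catch (IOException e) {
               writeFailed = true;  // mark the handle dirty so close() won't flush
               throw new RuntimeException("write failed, marking handle dirty", e);
           }
       }

       @Override
       public void close() throws IOException {
           // Only flush the tail of incoming records if every prior write
           // succeeded; otherwise the underlying writer is already in an
           // invalid state and further writes only hide the root cause.
           if (!writeFailed) {
               while (!incomingRecords.isEmpty()) {
                   writeToFile(incomingRecords.poll());
               }
           }
           closeFile();
       }

       private void writeToFile(String record) throws IOException { /* ... */ }
       private void closeFile() throws IOException { /* ... */ }
   }
   ```

   With a guard like this, the first `RemoteException` would surface directly instead of being buried under the invalid-state error from the Parquet writer.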

