Posted to issues@flink.apache.org by "Jingsong Lee (Jira)" <ji...@apache.org> on 2020/08/14 01:56:00 UTC
[jira] [Comment Edited] (FLINK-18659) FileNotFoundException when writing Hive orc tables
[ https://issues.apache.org/jira/browse/FLINK-18659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177417#comment-17177417 ]
Jingsong Lee edited comment on FLINK-18659 at 8/14/20, 1:55 AM:
----------------------------------------------------------------
master: 6a7b464c708c64f359b731ae5cee97ebb6c62d07
release-1.11: b728f22e9f95a0ea8952aa234150718d796e72ef
was (Author: lzljs3620320):
master: 6a7b464c708c64f359b731ae5cee97ebb6c62d07
release-1.11: [https://github.com/apache/flink/pull/13130]
> FileNotFoundException when writing Hive orc tables
> --------------------------------------------------
>
> Key: FLINK-18659
> URL: https://issues.apache.org/jira/browse/FLINK-18659
> Project: Flink
> Issue Type: Bug
> Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
> Affects Versions: 1.11.1
> Reporter: Jingsong Lee
> Assignee: Rui Li
> Priority: Critical
> Labels: pull-request-available
> Fix For: 1.11.2
>
>
> Writing Hive orc tables with Hive 1.1 version, will be:
> {code:java}
> Caused by: java.io.FileNotFoundException: File does not exist: hdfs://xxx/warehouse2/tmp_table/.part-6b51dbc2-e169-43a8-93b2-eb8d2be45054-0-0.inprogress.d77fa76c-4760-4cb6-bb5b-97d70afff000
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1218)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1210)
> 	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1210)
> 	at org.apache.flink.connectors.hive.write.HiveBulkWriterFactory$1.getSize(HiveBulkWriterFactory.java:54)
> 	at org.apache.flink.formats.hadoop.bulk.HadoopPathBasedPartFileWriter.getSize(HadoopPathBasedPartFileWriter.java:84)
> 	at org.apache.flink.table.filesystem.FileSystemTableSink$TableRollingPolicy.shouldRollOnEvent(FileSystemTableSink.java:451)
> 	at org.apache.flink.table.filesystem.FileSystemTableSink$TableRollingPolicy.shouldRollOnEvent(FileSystemTableSink.java:421)
> 	at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.write(Bucket.java:193)
> 	at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.onElement(Buckets.java:282)
> 	at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.onElement(StreamingFileSinkHelper.java:104)
> 	at org.apache.flink.table.filesystem.stream.StreamingFileWriter.processElement(StreamingFileWriter.java:118)
> 	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:717)
> 	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692)
> 	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672)
> 	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
> 	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
> {code}
> This may be due to lazy initialization in the ORC writer: the writer does not create the file until the first record arrives, so probing the file's size before then fails with FileNotFoundException.
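The defensive pattern implied by the diagnosis above can be sketched as follows. This is a minimal, hypothetical illustration of a size probe that tolerates a file the writer has not created yet; it uses `java.nio.file` instead of Hadoop's `FileSystem` API, and the class and method names (`LazyPartFileSize`, `safeSize`) are illustrative, not Flink's actual fix.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LazyPartFileSize {

    // Return the on-disk size of a part file, treating a not-yet-created
    // file as empty instead of throwing. A rolling policy that calls this
    // before the lazily-initialized writer has created the file gets 0
    // rather than a FileNotFoundException.
    static long safeSize(Path partFile) throws IOException {
        if (!Files.exists(partFile)) {
            return 0L; // writer has not created the file yet
        }
        return Files.size(partFile);
    }

    public static void main(String[] args) throws IOException {
        // Missing file: the probe reports size 0 instead of failing.
        Path missing = Path.of("does-not-exist.inprogress");
        System.out.println(safeSize(missing));

        // Existing file: the probe reports the real length.
        Path tmp = Files.createTempFile("part", ".orc");
        Files.write(tmp, new byte[] {1, 2, 3});
        System.out.println(safeSize(tmp));
        Files.deleteIfExists(tmp);
    }
}
```

The same idea applies against HDFS: check existence (or catch `FileNotFoundException`) around `FileSystem.getFileStatus` in the `getSize` path shown in the stack trace, so that rolling-policy checks before the first record do not fail.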
--
This message was sent by Atlassian Jira
(v8.3.4#803005)