Posted to issues@flink.apache.org by "JIAN WANG (Jira)" <ji...@apache.org> on 2020/07/14 04:41:00 UTC

[jira] [Created] (FLINK-18592) StreamingFileSink fails due to truncating HDFS file failure

JIAN WANG created FLINK-18592:
---------------------------------

             Summary: StreamingFileSink fails due to truncating HDFS file failure
                 Key: FLINK-18592
                 URL: https://issues.apache.org/jira/browse/FLINK-18592
             Project: Flink
          Issue Type: Bug
    Affects Versions: 1.10.1
            Reporter: JIAN WANG


I hit this issue on Flink 1.10.1. I run Flink on YARN (Hadoop 3.0.0-cdh6.3.2) with a StreamingFileSink.

The relevant part of the code looks like this:

public static <IN> StreamingFileSink<IN> build(String dir, BucketAssigner<IN, String> assigner, String prefix) {
    return StreamingFileSink.forRowFormat(new Path(dir), new SimpleStringEncoder<IN>())
            .withRollingPolicy(DefaultRollingPolicy.builder()
                    .withRolloverInterval(TimeUnit.HOURS.toMillis(2))
                    .withInactivityInterval(TimeUnit.MINUTES.toMillis(10))
                    .withMaxPartSize(1024L * 1024L * 1024L * 50) // max 50 GB per part file
                    .build())
            .withBucketAssigner(assigner)
            .withOutputFileConfig(OutputFileConfig.builder().withPartPrefix(prefix).build())
            .build();
}
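For context, here is a hedged sketch of how such a helper is typically wired into a job; the socket source, bucket format, and class/job names below are illustrative placeholders, not taken from this report:

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.DateTimeBucketAssigner;

    public class HashtagJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // StreamingFileSink only commits part files on checkpoints, so checkpointing must be enabled.
            env.enableCheckpointing(60_000);

            DataStream<String> events = env.socketTextStream("localhost", 9999); // placeholder source

            // "yyyy-MM-dd" produces bucket paths like 2020-06-25, as seen in the error below.
            events.addSink(build("hdfs:///business_log/hashtag",
                    new DateTimeBucketAssigner<>("yyyy-MM-dd"),
                    "hashtag"));

            env.execute("hashtag-sink");
        }
    }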

 

The error is:

java.io.IOException: Problem while truncating file: hdfs:///business_log/hashtag/2020-06-25/.hashtag-122-37.inprogress.8e65f69c-b5ba-4466-a844-ccc0a5a93de2

Due to this issue, the job cannot restart from the latest checkpoint or savepoint.
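The background here: on restore, the sink's Hadoop recoverable writer truncates each in-progress file back to the length recorded in the checkpoint before it resumes writing, so a failed truncate blocks the whole recovery. Below is a minimal sketch of the underlying HDFS call (this is not Flink's actual recovery code; the class name, path, and length arguments are placeholders):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TruncateProbe {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path inProgress = new Path(args[0]); // e.g. the .inprogress file from the error above
            long checkpointedLength = Long.parseLong(args[1]);
            // FileSystem#truncate (Hadoop 2.7+) returns true if the file is truncated
            // immediately, false if the last block must first be recovered asynchronously.
            boolean done = fs.truncate(inProgress, checkpointedLength);
            if (!done) {
                // Until that async recovery finishes, reopening the file fails, which is
                // consistent with the "Problem while truncating file" error above.
                System.out.println("truncate pending block recovery; retry later");
            }
        }
    }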

 

Currently, my workaround is to keep the latest 3 checkpoints; if the restore fails, I manually restart the job from the second-to-last checkpoint.
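For reference, a sketch of that workaround in configuration form (the checkpoint path and job id are placeholders):

    # flink-conf.yaml: retain the last three completed checkpoints
    state.checkpoints.num-retained: 3

    # restart from an older retained checkpoint instead of the latest one
    flink run -s hdfs:///flink-checkpoints/<job-id>/chk-<n> -d my-job.jar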



--
This message was sent by Atlassian Jira
(v8.3.4#803005)