Posted to user@flink.apache.org by highfei2011 <hi...@outlook.com> on 2020/10/15 07:10:47 UTC

Streaming File Sink cannot generate _SUCCESS tag files

Hi, everyone!
      Currently I am running into a problem with the Streaming File Sink: after consuming Kafka data with Flink 1.11.2, I sink it to HDFS using the bucketing policy (BucketAssigner) of the Streaming File Sink, but the _SUCCESS tag file is not generated by default.
      I have added the following to the Hadoop configuration:


import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter

val hadoopConf = new Configuration()
hadoopConf.set(FileOutputCommitter.SUCCESSFUL_JOB_OUTPUT_DIR_MARKER, "true")


But there is still no _SUCCESS file in the output directory. Why does the Streaming File Sink not support generating _SUCCESS files?


Thank you.




Best,
Yang

Re: Streaming File Sink cannot generate _SUCCESS tag files

Posted by Jingsong Li <ji...@gmail.com>.
Hi, Yang,

"SUCCESSFUL_JOB_OUTPUT_DIR_MARKER" does not work in StreamingFileSink.

You can take a look to partition commit feature [1],

[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/filesystem.html#partition-commit
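
For example, with the Table API the success-file policy can be enabled on a filesystem sink table as below. This is only a minimal sketch: the table name, schema, and path are placeholders, and I use the process-time commit trigger just to keep it self-contained; the sink.partition-commit.* options are the ones documented in [1].

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
val tableEnv = StreamTableEnvironment.create(env)

// Placeholder sink table: 'success-file' writes a _SUCCESS file into a
// partition's directory when the partition is committed.
tableEnv.executeSql(
  """CREATE TABLE hdfs_sink (
    |  log STRING,
    |  dt  STRING
    |) PARTITIONED BY (dt) WITH (
    |  'connector' = 'filesystem',
    |  'path' = 'hdfs:///tmp/output',
    |  'format' = 'parquet',
    |  'sink.partition-commit.trigger' = 'process-time',
    |  'sink.partition-commit.delay' = '0 s',
    |  'sink.partition-commit.policy.kind' = 'success-file'
    |)""".stripMargin)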

Best,
Jingsong Lee
