Posted to issues@flink.apache.org by "Aljoscha Krettek (JIRA)" <ji...@apache.org> on 2018/03/01 14:50:00 UTC
[jira] [Closed] (FLINK-8814) Control over the extension of part files created by BucketingSink
[ https://issues.apache.org/jira/browse/FLINK-8814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Aljoscha Krettek closed FLINK-8814.
-----------------------------------
Resolution: Fixed
Fix Version/s: 1.5.0
Implemented on release-1.5 in 06b05cd204bd9a12884ad12805a61005ef40fbe7
Implemented on master in f152542468b37783932fc2c7725a3a5871b7a701
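The naming scheme discussed below, and the effect of the fix, can be sketched in plain Java. This is a hypothetical illustration, not BucketingSink's actual code; the `partSuffix` parameter stands in for the configurable extension this ticket adds (the method name `setPartSuffix` mentioned in the comment is an assumption).

```java
// Sketch of BucketingSink's part-file naming (see the pattern quoted below).
// partPrefix, subtaskIndex, and partCounter come from the sink's state;
// partSuffix is the new, configurable extension (null before the fix).
public class PartFileNaming {
    static String partFileName(String partPrefix, int subtaskIndex,
                               long partCounter, String partSuffix) {
        String name = partPrefix + "-" + subtaskIndex + "-" + partCounter;
        return partSuffix == null ? name : name + partSuffix;
    }

    public static void main(String[] args) {
        // Before the fix: no way to append an extension
        System.out.println(partFileName("part", 3, 7, null));
        // After the fix: e.g. something like sink.setPartSuffix(".avro")
        System.out.println(partFileName("part", 3, 7, ".avro"));
    }
}
```

With a suffix of ".avro", a part file such as part-3-7 becomes part-3-7.avro, which tools that key on the file extension (Hue, spark-avro) can then recognize.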
> Control over the extension of part files created by BucketingSink
> -----------------------------------------------------------------
>
> Key: FLINK-8814
> URL: https://issues.apache.org/jira/browse/FLINK-8814
> Project: Flink
> Issue Type: Improvement
> Components: Streaming Connectors
> Affects Versions: 1.4.0
> Reporter: Jelmer Kuperus
> Priority: Major
> Fix For: 1.5.0
>
>
> BucketingSink creates files with the following pattern
> {noformat}
> partPrefix + "-" + subtaskIndex + "-" + bucketState.partCounter
> {noformat}
> When using checkpointing you have no control over the extension of the final files generated. This is inconvenient when, for instance, you are writing files in the Avro format, because
> # [Hue|http://gethue.com/] will not be able to render the files as Avro. See this [file|https://github.com/cloudera/hue/blob/master/apps/filebrowser/src/filebrowser/views.py#L730]
> # [Spark avro|https://github.com/databricks/spark-avro/] will not be able to read the files unless you set a special property. See [this ticket|https://github.com/databricks/spark-avro/issues/203]
> It would be good if we had the ability to customize the extension of the created files.
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)