Posted to issues@flink.apache.org by "zhangyunyun (Jira)" <ji...@apache.org> on 2021/01/18 02:01:00 UTC

[jira] [Updated] (FLINK-21003) Flink add Sink to AliyunOSS doesn't work

     [ https://issues.apache.org/jira/browse/FLINK-21003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhangyunyun updated FLINK-21003:
--------------------------------
    Description: 
When I add a sink to OSS using the code below:

 
{code:java}
String path = "oss://<bucket>/<dir>";
StreamingFileSink streamingFileSink = StreamingFileSink
    .forRowFormat(new Path(path), new SimpleStringEncoder<String>("UTF-8"))
    .withRollingPolicy(
        DefaultRollingPolicy.builder()
            .withRolloverInterval(TimeUnit.MINUTES.toMillis(5))
            .withInactivityInterval(TimeUnit.MINUTES.toMillis(1))
            .withMaxPartSize(1024 * 1024 * 10)
            .build()
    ).build();

strStream.addSink(streamingFileSink);{code}
 

It throws an error:

 
{code:java}
Recoverable writers on Hadoop are only supported for HDFS
{code}
Did I make a mistake somewhere?

I want to use Aliyun OSS to store the stream data, split into different files.

The official Flink documentation's example is as follows:
{code:java}
// Write to OSS bucket
stream.writeAsText("oss://<your-bucket>/<object-name>")
{code}
How can I use this to split the output into different files based on the data's attributes?
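For reference, splitting StreamingFileSink output by record attributes is normally done with a custom BucketAssigner. The sketch below shows only the key-derivation logic under a hypothetical convention (CSV records whose first field names the target sub-directory); the class name and record layout are illustrative assumptions, not part of the Flink API:

```java
// Sketch: derive a bucket id (sub-directory name) from each record.
// In the real job this logic would live in a BucketAssigner<String, String>
// implementation passed to the sink builder via .withBucketAssigner(...).
public class BucketIdDemo {

    // Hypothetical record layout: "category,rest-of-payload".
    // The first CSV field decides which bucket the record lands in.
    static String bucketIdFor(String element) {
        return element.split(",", 2)[0];
    }

    public static void main(String[] args) {
        System.out.println(bucketIdFor("orders,id=42"));   // orders
        System.out.println(bucketIdFor("users,name=bob")); // users
    }
}
```

In the job itself, the builder chain would then read .forRowFormat(...).withBucketAssigner(...).withRollingPolicy(...).build(); withBucketAssigner is part of StreamingFileSink's row-format builder.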

 

Thanks!
  was:
When I add a sink to OSS using the code below:

 
{code:java}
String path = "oss://<bucket>/<dir>";
StreamingFileSink streamingFileSink = StreamingFileSink
    .forRowFormat(new Path(path), new SimpleStringEncoder<String>("UTF-8"))
    .withRollingPolicy(
        DefaultRollingPolicy.builder()
            .withRolloverInterval(TimeUnit.MINUTES.toMillis(5))
            .withInactivityInterval(TimeUnit.MINUTES.toMillis(1))
            .withMaxPartSize(1024 * 1024 * 10)
            .build()
    ).build();

strStream.addSink(streamingFileSink);{code}
 

It throws an error:

 
{code:java}
Recoverable writers on Hadoop are only supported for HDFS
{code}
Did I make a mistake somewhere?

I want to use Aliyun OSS to store the stream data, split into different files.

The official Flink documentation's example is as follows:
{code:java}
// Write to OSS bucket
stream.writeAsText("oss://<your-bucket>/<object-name>")
{code}
How can I use this to split the output into different files based on the data's attributes?

 

Thanks!

> Flink add Sink to AliyunOSS doesn't work
> ----------------------------------------
>
>                 Key: FLINK-21003
>                 URL: https://issues.apache.org/jira/browse/FLINK-21003
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / FileSystem
>    Affects Versions: 1.11.0
>            Reporter: zhangyunyun
>            Priority: Major
>
> When I add a sink to OSS using the code below:
>  
> {code:java}
> String path = "oss://<bucket>/<dir>";
> StreamingFileSink streamingFileSink = StreamingFileSink
>     .forRowFormat(new Path(path), new SimpleStringEncoder<String>("UTF-8"))
>     .withRollingPolicy(
>         DefaultRollingPolicy.builder()
>             .withRolloverInterval(TimeUnit.MINUTES.toMillis(5))
>             .withInactivityInterval(TimeUnit.MINUTES.toMillis(1))
>             .withMaxPartSize(1024 * 1024 * 10)
>             .build()
>     ).build();
> strStream.addSink(streamingFileSink);{code}
>  
> It throws an error:
>  
> {code:java}
> Recoverable writers on Hadoop are only supported for HDFS
> {code}
> Did I make a mistake somewhere?
> I want to use Aliyun OSS to store the stream data, split into different files.
> The official Flink documentation's example is as follows:
> {code:java}
> // Write to OSS bucket
> stream.writeAsText("oss://<your-bucket>/<object-name>")
> {code}
> How can I use this to split the output into different files based on the data's attributes?
>  
> Thanks!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)