Posted to issues@flink.apache.org by zhangxinyu1 <gi...@git.apache.org> on 2018/05/31 13:21:53 UTC

[GitHub] flink pull request #6108: [FLINK-9367] [Streaming Connectors] Allow to do tr...

GitHub user zhangxinyu1 opened a pull request:

    https://github.com/apache/flink/pull/6108

    [FLINK-9367] [Streaming Connectors] Allow to do truncate() in class BucketingSink when Hadoop version is lower than 2.7

    ## What is the purpose of the change
    
    In the current implementation of the BucketingSink class, the truncate() function cannot be used if the Hadoop version is lower than 2.7. Instead, the sink writes a valid-length file to mark how much of the data is valid.
    However, users who read data from HDFS may not know, or should not need to know, how to deal with this valid-length file.
    Hence, we need a configuration option to decide whether to use the valid-length file. If it is not used, we should rewrite the valid part of the file instead.
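
    To illustrate the burden this puts on readers, here is a minimal sketch (not part of this PR) of what a consumer currently has to do when a valid-length file is present: read only the first N bytes of the part file, where N is the length recorded in the companion file. The file names are hypothetical and the local-file API is used for brevity; on HDFS one would go through the Hadoop FileSystem API.
    
    ```java
    import java.io.InputStream;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    
    // Hypothetical file names; BucketingSink derives the real names from its
    // configured prefixes and suffixes.
    long validLength = Long.parseLong(
            new String(Files.readAllBytes(Paths.get("part-0-0.valid-length")),
                    StandardCharsets.UTF_8).trim());
    
    try (InputStream in = Files.newInputStream(Paths.get("part-0-0"))) {
        byte[] buf = new byte[8192];
        long remaining = validLength;
        int read;
        // Stop at the recorded valid length; bytes past it belong to an
        // incomplete checkpoint and must be ignored.
        while (remaining > 0
                && (read = in.read(buf, 0, (int) Math.min(buf.length, remaining))) != -1) {
            process(buf, read); // hypothetical downstream handler
            remaining -= read;
        }
    }
    ```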
    
    ## Brief change log
    
    Add a method `enableForceTruncateInProgressFile()` to BucketingSink that decides whether to use the valid-length file. If it is enabled, the valid-length file will not be produced; instead, the valid part of the in-progress file will be rewritten.
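
    A minimal usage sketch, assuming the method name from this change log (the element type and the surrounding code are illustrative):
    
    ```java
    // Hypothetical usage of the option proposed in this PR.
    BucketingSink<String> sink = new BucketingSink<>("hdfs:///base/path");
    sink.enableForceTruncateInProgressFile(); // rewrite the in-progress file on
                                              // restore instead of emitting a
                                              // valid-length file
    stream.addSink(sink); // 'stream' is an existing DataStream<String>
    ```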
    
    ## Verifying this change
    
    This change is trivial work without any test coverage.
    


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/zhangxinyu1/flink force-recovery-file-in-bucketingsink

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/6108.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #6108
    
----
commit 7c5ba6d54658916e65c40fbbed646efce2c40645
Author: unknown <zh...@...>
Date:   2018-05-31T12:52:09Z

    allow to do truncate() when hadoop version is lower than 2.7

----


---

[GitHub] flink issue #6108: [FLINK-9367] [Streaming Connectors] Allow to do truncate(...

Posted by zhangxinyu1 <gi...@git.apache.org>.
Github user zhangxinyu1 commented on the issue:

    https://github.com/apache/flink/pull/6108
  
    @kl0u @joshfg @StephanEwen Could you please take a look at this pr?


---

[GitHub] flink pull request #6108: [FLINK-9367] [Streaming Connectors] Allow to do tr...

Posted by zhangxinyu1 <gi...@git.apache.org>.
Github user zhangxinyu1 closed the pull request at:

    https://github.com/apache/flink/pull/6108


---

[GitHub] flink issue #6108: [FLINK-9367] [Streaming Connectors] Allow to do truncate(...

Posted by StephanEwen <gi...@git.apache.org>.
Github user StephanEwen commented on the issue:

    https://github.com/apache/flink/pull/6108
  
    @kl0u please link the issue once you created it.
    
    This is currently at a very early stage, in design discussions between @kl0u, @aljoscha, and me.
    The main points of the rewrite are:
      - Use Flink's FileSystem abstraction, to make it work with shaded S3, Swift, etc. and to give an easier interface.
      - Add a proper "ChunkedWriter" abstraction to the FileSystems, which handles write, persist-on-checkpoint, and rollback-to-checkpoint in a FileSystem-specific way - for example, via truncate()/append() on POSIX and HDFS, or MultiPartUploads on S3 (see the sketch after this list).
      - Add support for gathering large chunks across checkpoints, to make Parquet and ORC compression more effective.
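
    A hypothetical sketch of what such a "ChunkedWriter" interface could look like, derived only from the three responsibilities listed above; the design was still under discussion at this point, so all names and signatures are illustrative:
    
    ```java
    import java.io.IOException;
    import java.io.Serializable;
    
    // Illustrative only; not an actual Flink interface.
    public interface ChunkedWriter<T> {
    
        // Opaque, checkpointable position in the output.
        interface ResumeHandle extends Serializable {}
    
        // Append a record to the current chunk.
        void write(T element) throws IOException;
    
        // Make everything written so far durable (called on checkpoint) and
        // return a handle describing the persisted position.
        ResumeHandle persist() throws IOException;
    
        // Discard everything written after the given position (called on
        // recovery), e.g. via truncate()/append() on POSIX and HDFS, or by
        // aborting a MultiPartUpload on S3.
        void rollbackTo(ResumeHandle handle) throws IOException;
    }
    ```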


---

[GitHub] flink issue #6108: [FLINK-9367] [Streaming Connectors] Allow to do truncate(...

Posted by zhangxinyu1 <gi...@git.apache.org>.
Github user zhangxinyu1 commented on the issue:

    https://github.com/apache/flink/pull/6108
  
    @kl0u Great! I look forward to it.
    Regarding the bandwidth limitation: we hope jobs can read data at a rate below X bytes/sec.
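
    For illustration, a minimal sketch (not part of this PR) of throttling reads to roughly X bytes/sec with Guava's RateLimiter, treating one permit as one byte:
    
    ```java
    import com.google.common.util.concurrent.RateLimiter;
    import java.io.IOException;
    import java.io.InputStream;
    
    // Hypothetical helper: drain a stream while staying under bytesPerSecond.
    static void throttledRead(InputStream in, double bytesPerSecond) throws IOException {
        RateLimiter limiter = RateLimiter.create(bytesPerSecond); // permits == bytes
        byte[] buf = new byte[8192];
        int read;
        while ((read = in.read(buf)) != -1) {
            limiter.acquire(read); // block until 'read' byte-permits accrue
            emit(buf, read);       // hypothetical downstream handler
        }
    }
    ```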


---

[GitHub] flink issue #6108: [FLINK-9367] [Streaming Connectors] Allow to do truncate(...

Posted by zhangxinyu1 <gi...@git.apache.org>.
Github user zhangxinyu1 commented on the issue:

    https://github.com/apache/flink/pull/6108
  
    @StephanEwen Thanks for your reply. BTW, is there a JIRA issue for the BucketingSink rewrite? We also want to use a BucketingSink that supports Parquet and ORC.


---

[GitHub] flink issue #6108: [FLINK-9367] [Streaming Connectors] Allow to do truncate(...

Posted by kl0u <gi...@git.apache.org>.
Github user kl0u commented on the issue:

    https://github.com/apache/flink/pull/6108
  
    Thanks for the useful input here @zhangxinyu1 and @StephanEwen. As soon as I have something concrete I will create the JIRA and post it here.


---

[GitHub] flink issue #6108: [FLINK-9367] [Streaming Connectors] Allow to do truncate(...

Posted by zhangxinyu1 <gi...@git.apache.org>.
Github user zhangxinyu1 commented on the issue:

    https://github.com/apache/flink/pull/6108
  
    @kl0u Thanks. Would you please consider implementing a BucketingSource that we can use to read data from FileSystems? Besides that, we also care about bandwidth limits.


---

[GitHub] flink issue #6108: [FLINK-9367] [Streaming Connectors] Allow to do truncate(...

Posted by StephanEwen <gi...@git.apache.org>.
Github user StephanEwen commented on the issue:

    https://github.com/apache/flink/pull/6108
  
    Do you have a Hadoop version older than 2.7?
    
    We are currently attempting to rewrite the Bucketing Sink completely, for better compatibility with S3 and better support for Parquet / ORC. We were actually thinking of dropping support for file systems that do not support `truncate()` - so getting this feedback would be good.
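
    For context, Hadoop only added `FileSystem#truncate` in 2.7, which is why code targeting older clients typically probes for it via reflection, roughly like this (a sketch; the actual check in BucketingSink differs in its details):
    
    ```java
    import java.lang.reflect.Method;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    
    // Returns the truncate() method if the Hadoop client supports it
    // (Hadoop >= 2.7), or null if the sink must fall back to valid-length
    // files (or, with this PR, to rewriting the in-progress file).
    static Method reflectTruncate() {
        try {
            return FileSystem.class.getMethod("truncate", Path.class, long.class);
        } catch (NoSuchMethodException e) {
            return null;
        }
    }
    ```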


---

[GitHub] flink issue #6108: [FLINK-9367] [Streaming Connectors] Allow to do truncate(...

Posted by kl0u <gi...@git.apache.org>.
Github user kl0u commented on the issue:

    https://github.com/apache/flink/pull/6108
  
    @zhangxinyu1 as soon as this sink is ready, I believe that the existing File Source will be able to read the output of the Bucketing Sink. As far as bandwidth limitations are concerned, could you elaborate a bit on what you mean? You want to tell the source to read at speed X records/sec?


---