Posted to github@beam.apache.org by GitBox <gi...@apache.org> on 2022/06/04 21:55:02 UTC

[GitHub] [beam] damccorm opened a new issue, #21222: No parallelism when using SDFBoundedSourceReader with Flink

damccorm opened a new issue, #21222:
URL: https://github.com/apache/beam/issues/21222

   Background: I am using TFX pipelines with Flink as the runner for Beam (a Flink session cluster deployed with [flink-on-k8s-operator](https://github.com/GoogleCloudPlatform/flink-on-k8s-operator)). The Flink cluster has 2 taskmanagers with 16 cores each, and parallelism is set to 32. TFX components call `beam.io.ReadFromTFRecord` to load data, passing in a glob file pattern; my dataset of TFRecords is split across 160 files. When I run the component, processing for all 160 files ends up in a single Flink subtask, i.e. the effective parallelism is 1. See the images below:
   
   !https://i.imgur.com/ppba0AL.png!
   
   !https://i.imgur.com/rSTFATn.png!
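   
   For reference, here is a minimal sketch of the pipeline shape. The file pattern, Flink master address, and worker environment settings are placeholders for my actual setup (in practice the read is issued from inside the TFX component):
   
   ```python
   import apache_beam as beam
   from apache_beam.options.pipeline_options import PipelineOptions
   
   # Placeholder Flink runner options - in my setup these point at the session
   # cluster deployed by flink-on-k8s-operator, with parallelism set to 32.
   options = PipelineOptions([
       '--runner=FlinkRunner',
       '--flink_master=flink-jobmanager:8081',
       '--parallelism=32',
       '--environment_type=EXTERNAL',
       '--environment_config=localhost:50000',
   ])
   
   with beam.Pipeline(options=options) as p:
       _ = (
           p
           # Glob pattern matching the 160 TFRecord files (placeholder path).
           | 'ReadTFRecords' >> beam.io.ReadFromTFRecord('/data/tfrecords/train-*')
           | 'Count' >> beam.combiners.Count.Globally()
           | 'Print' >> beam.Map(print)
       )
   ```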
   
    
    I have tried all manner of Beam/Flink options and different versions of Beam and Flink, but the behaviour remains the same.
   
   Furthermore, the behaviour affects anything that uses `apache_beam.io.iobase.SDFBoundedSourceReader`: `apache_beam.io.parquetio.ReadFromParquet`, for example, has the same issue (see the sketch below). Either I'm missing some obscure setting in my configuration, or this is a bug in the Flink runner.
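   
   The Parquet read has the same shape, e.g. (again with a placeholder pattern):
   
   ```python
   # Exhibits the same single-subtask behaviour as the TFRecord read above.
   rows = p | 'ReadParquet' >> beam.io.ReadFromParquet('/data/parquet/part-*.parquet')
   ```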
     
   
   Imported from Jira [BEAM-12915](https://issues.apache.org/jira/browse/BEAM-12915). Original Jira may contain additional context.
   Reported by: roganmorrow.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org