Posted to commits@beam.apache.org by "Eugene Kirpichov (JIRA)" <ji...@apache.org> on 2016/03/31 02:33:25 UTC

[jira] [Commented] (BEAM-68) Support for limiting parallelism of a step

    [ https://issues.apache.org/jira/browse/BEAM-68?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15219126#comment-15219126 ] 

Eugene Kirpichov commented on BEAM-68:
--------------------------------------

I think the two use cases are quite different and need different APIs. The first use case is about the expected outputs of the pipeline (i.e. runner-agnostic); the second is about the execution (implemented in a runner-specific way).

I think for the first case we just need to come up with a convenient way to encode the sharding needs of custom sinks (e.g. the output filename for a file-based sink, or, say, the shard of a pubsub topic for a sink that outputs to some pubsub system), and it can probably be done as a PTransform. I think it can actually be done using the "data-dependent sinks" API (route the data to a destination shard, one sink per shard), BEAM-92.
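As a rough illustration of the routing idea (plain Python, not the Beam API -- `assign_shard` and `group_into_shards` are hypothetical names), a sketch of "one destination shard per key, exactly k keys":

```python
# Sketch only: mimics the "route each element to one of k destination
# shards" pattern, the same trick as a GroupByKey over exactly k keys.
from collections import defaultdict

def assign_shard(element, num_shards):
    """Deterministically map an element to a shard id in [0, num_shards)."""
    return hash(element) % num_shards

def group_into_shards(elements, num_shards):
    """Simulate a GroupByKey over shard ids: k keys yield at most k groups,
    so at most k output files/destinations are produced."""
    shards = defaultdict(list)
    for element in elements:
        shards[assign_shard(element, num_shards)].append(element)
    return dict(shards)
```

This guarantees at most k outputs, but (as noted below) it says nothing about how much parallelism a runner uses downstream.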

The second case requires support in the Beam model.

I'm inclined to reopen BEAM-159; let me know what you think.

> Support for limiting parallelism of a step
> ------------------------------------------
>
>                 Key: BEAM-68
>                 URL: https://issues.apache.org/jira/browse/BEAM-68
>             Project: Beam
>          Issue Type: New Feature
>          Components: beam-model
>            Reporter: Daniel Halperin
>
> Users may want to limit the parallelism of a step. Two classic uses cases are:
> - User wants to produce at most k files, so sets TextIO.Write.withNumShards(k).
> - External API only supports k QPS, so user sets a limit of k/(expected QPS/step) on the ParDo that makes the API call.
> Unfortunately, there is no way to do this effectively within the Beam model. A GroupByKey with exactly k keys will guarantee that only k elements are produced, but runners are free to break fusion in ways that each element may be processed in parallel later.
> To implement this functionality, I believe we need to add this support to the Beam model.
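For the second use case above (an external API that supports only k QPS), what a per-worker limit inside a DoFn might look like can be sketched with a simple pacing loop. This is plain Python, not a Beam primitive -- the `RateLimiter` class here is hypothetical, and a real solution would still need model support to bound the number of workers:

```python
import time

class RateLimiter:
    """Sketch of a per-worker QPS limiter: spaces calls at least
    1/qps seconds apart. Without model-level control of parallelism,
    the aggregate rate is still (workers * qps), which is exactly
    why this needs support in the Beam model."""

    def __init__(self, qps):
        self.interval = 1.0 / qps
        self.next_allowed = time.monotonic()

    def acquire(self):
        """Block until the next call is allowed, then reserve a slot."""
        now = time.monotonic()
        wait = self.next_allowed - now
        if wait > 0:
            time.sleep(wait)
        self.next_allowed = max(now, self.next_allowed) + self.interval
```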



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)