Posted to commits@beam.apache.org by "Daniel Halperin (JIRA)" <ji...@apache.org> on 2016/08/30 21:17:20 UTC

[jira] [Updated] (BEAM-92) Data-dependent sinks

     [ https://issues.apache.org/jira/browse/BEAM-92?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Halperin updated BEAM-92:
--------------------------------
    Assignee: Vikas Kedigehalli

> Data-dependent sinks
> --------------------
>
>                 Key: BEAM-92
>                 URL: https://issues.apache.org/jira/browse/BEAM-92
>             Project: Beam
>          Issue Type: New Feature
>          Components: beam-model
>            Reporter: Eugene Kirpichov
>            Assignee: Vikas Kedigehalli
>
> The current sink API writes all data to a single destination, but there are many use cases where different records need to be routed to different destinations, and the set of destinations is data-dependent (for example, each record is written to a file whose name is derived from a field of the record). This cannot be implemented with a Partition transform, which requires a fixed set of outputs known at pipeline construction time.
> One internally discussed proposal was an API of the form:
> {code}
> PCollection<Void> PCollection<T>.apply(
>     Write.using(DoFn<T, SinkT> where,
>                 MapFn<SinkT, WriteOperation<WriteResultT, T>> how))
> {code}
> so an item T gets written to a destination (or multiple destinations) determined by "where", and the writing strategy is determined by "how", which produces a WriteOperation (the current API's global-init / write / global-finalize hooks) for any given destination.
> This API also has other benefits:
> * allows the SinkT to be computed dynamically (in "where"), rather than specified at pipeline construction time
> * removes the necessity for a Sink class entirely
> * can be sequenced with downstream transforms (further transforms can be applied to the returned PCollection<Void>, whereas the current Write.to() returns a PDone)
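> A minimal caller-side sketch of how the proposed API might be used follows; the Write.using overload, and the Event and TextWriteOperation classes, are illustrative assumptions for this proposal, not existing Beam classes.
> {code}
> // Sketch only: Write.using(where, how) is the proposed API, not an existing one.
> PCollection<Event> events = ...;
>
> PCollection<Void> results = events.apply(
>     Write.using(
>         // "where": compute one or more destinations (SinkT = String here) per element.
>         new DoFn<Event, String>() {
>           @ProcessElement
>           public void processElement(ProcessContext c) {
>             c.output("gs://my-bucket/output/" + c.element().getDate());
>           }
>         },
>         // "how": map each destination to a WriteOperation that carries the
>         // global init / write / global finalize hooks for that destination.
>         (String destination) -> new TextWriteOperation(destination)));
>
> // Because a PCollection<Void> is returned (not PDone), further transforms
> // can be sequenced after the write.
> {code}
> Here SinkT is just a String path; a richer SinkT type could carry additional per-destination information such as sharding or schema.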



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)