Posted to issues@beam.apache.org by "Luke Cwik (Jira)" <ji...@apache.org> on 2020/05/15 17:53:00 UTC
[jira] [Updated] (BEAM-10012) Update Python SDK to construct Dataflow job requests from Beam runner API protos
[ https://issues.apache.org/jira/browse/BEAM-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Luke Cwik updated BEAM-10012:
-----------------------------
Status: Open (was: Triage Needed)
> Update Python SDK to construct Dataflow job requests from Beam runner API protos
> --------------------------------------------------------------------------------
>
> Key: BEAM-10012
> URL: https://issues.apache.org/jira/browse/BEAM-10012
> Project: Beam
> Issue Type: New Feature
> Components: sdk-py-core
> Reporter: Chamikara Madhusanka Jayalath
> Priority: Major
>
> Currently, portable runners are expected to do the following when constructing a runner-specific job:
> SDK specific job graph -> Beam runner API proto -> Runner specific job request
> Portable Spark and Flink follow this model.
> Dataflow does the following:
> SDK specific job graph -> Runner specific job request
> Beam runner API proto -> Upload to GCS -> Download at workers
>
> We should update Dataflow to follow the former path, which all portable runners are expected to follow.
> This will simplify the cross-language transforms job construction logic for Dataflow.
> We can probably start by implementing this in the Python SDK for portions of the pipeline received by expanding external transforms.
> cc: [~lcwik] [~robertwb]
>
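The two translation flows described in the ticket can be sketched as follows. This is an illustrative sketch only: every function name below is a hypothetical placeholder, not a real Beam or Dataflow API.

```python
# Illustrative sketch of the two job-construction flows from the ticket.
# All names here are placeholders, NOT real Beam/Dataflow APIs.

def sdk_graph_to_runner_api_proto(sdk_graph):
    """Stand-in for translating an SDK-specific job graph into a
    Beam runner API proto."""
    return {"runner_api_proto_of": sdk_graph}

def runner_api_proto_to_job_request(proto):
    """Stand-in for translating a Beam runner API proto into a
    runner-specific job request (what portable Spark/Flink do)."""
    return {"job_request_from": proto}

def sdk_graph_to_job_request_directly(sdk_graph):
    """Stand-in for Dataflow's current direct translation, which
    bypasses the runner API proto for job construction."""
    return {"job_request_from": sdk_graph}

def submit_portable(sdk_graph):
    # Desired flow for all portable runners:
    # SDK job graph -> Beam runner API proto -> runner-specific request.
    proto = sdk_graph_to_runner_api_proto(sdk_graph)
    return runner_api_proto_to_job_request(proto)

def submit_dataflow_today(sdk_graph):
    # Dataflow's current flow: SDK job graph -> runner-specific request,
    # with the runner API proto handled separately (uploaded to GCS and
    # downloaded at the workers).
    return sdk_graph_to_job_request_directly(sdk_graph)
```

The proposal is to make Dataflow's job construction go through `submit_portable`-style translation, so that the runner API proto is the single intermediate representation shared by all portable runners.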
--
This message was sent by Atlassian Jira
(v8.3.4#803005)