Posted to issues@beam.apache.org by "Chamikara Madhusanka Jayalath (Jira)" <ji...@apache.org> on 2019/10/08 18:34:00 UTC

[jira] [Created] (BEAM-8367) Python BigQuery sink should use unique IDs for mode STREAMING_INSERTS

Chamikara Madhusanka Jayalath created BEAM-8367:
---------------------------------------------------

             Summary: Python BigQuery sink should use unique IDs for mode STREAMING_INSERTS
                 Key: BEAM-8367
                 URL: https://issues.apache.org/jira/browse/BEAM-8367
             Project: Beam
          Issue Type: Improvement
          Components: sdk-py-core
            Reporter: Chamikara Madhusanka Jayalath
            Assignee: Pablo Estrada


Unique IDs ensure (on a best-effort basis) that writes to BigQuery are idempotent; for example, the same record is not written twice after a VM failure.
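
As a toy illustration (plain Python, not actual Beam or BigQuery API code; the names here are made up for the sketch), BigQuery's streaming-insert dedup can be modeled as dropping any row whose insertId has already been seen. A retried insert that reuses the same IDs therefore writes nothing new:

```python
import uuid

def tag_with_insert_ids(rows):
    # Attach a unique insertId to each row, generated exactly once per row.
    return [(str(uuid.uuid4()), row) for row in rows]

def bq_best_effort_dedupe(tagged_rows, seen=None):
    # Toy model of BigQuery streaming-insert dedup: rows whose insertId
    # was already seen are silently dropped instead of written again.
    if seen is None:
        seen = set()
    written = []
    for insert_id, row in tagged_rows:
        if insert_id not in seen:
            seen.add(insert_id)
            written.append(row)
    return written, seen
```

Replaying the same tagged batch (e.g. a retry of the same insert request) produces no duplicates, because the insertIds match.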

 

Currently, the Python BQ sink inserts unique IDs at the point linked below, but they will be re-generated after a VM failure, resulting in data duplication.

[https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigquery.py#L766]

 

The correct fix is to apply a Reshuffle to checkpoint the unique IDs once they are generated, similar to how the Java BQ sink operates:

[https://github.com/apache/beam/blob/dcf6ad301069e4d2cfaec5db6b178acb7bb67f49/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/StreamingWriteTables.java#L225]
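
The failure mode can be sketched in plain Python (a toy model, not Beam; the function and flag names are invented for illustration). A "VM failure" makes the insert run twice: if the IDs were checkpointed before the write (as a Reshuffle would do by materializing the tagged collection), the retry reuses them and dedup works; if the ID-tagging step re-runs, fresh IDs are produced and dedup is defeated:

```python
import uuid

def simulate_retry(rows, checkpoint_ids):
    # Toy model: the insert step runs twice (original attempt + retry
    # after a simulated VM failure). With checkpoint_ids=True the retry
    # reuses the IDs from the first attempt (as after a Reshuffle);
    # otherwise the tagging step re-runs and generates new IDs.
    seen, table = set(), []
    first_attempt = [(str(uuid.uuid4()), r) for r in rows]
    if checkpoint_ids:
        retry = first_attempt
    else:
        retry = [(str(uuid.uuid4()), r) for r in rows]
    for attempt in (first_attempt, retry):
        for insert_id, row in attempt:
            if insert_id not in seen:  # BigQuery-style best-effort dedup
                seen.add(insert_id)
                table.append(row)
    return table
```

Without the checkpoint, every row lands in the table twice; with it, the retry is a no-op, which is the behavior the Reshuffle in the Java sink provides.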

 

Pablo, can you do an initial assessment here?

I think this is a relatively small fix but I might be wrong.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)