Posted to issues@beam.apache.org by "Beam JIRA Bot (Jira)" <ji...@apache.org> on 2021/03/06 17:19:00 UTC

[jira] [Updated] (BEAM-11705) Write to bigquery always assigns unique insert id per row causing performance issue

     [ https://issues.apache.org/jira/browse/BEAM-11705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Beam JIRA Bot updated BEAM-11705:
---------------------------------
    Labels: stale-assigned  (was: )

> Write to bigquery always assigns unique insert id per row causing performance issue
> -----------------------------------------------------------------------------------
>
>                 Key: BEAM-11705
>                 URL: https://issues.apache.org/jira/browse/BEAM-11705
>             Project: Beam
>          Issue Type: Improvement
>          Components: io-py-gcp
>            Reporter: Ning Kang
>            Assignee: Pablo Estrada
>            Priority: P2
>              Labels: stale-assigned
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The `ignore_insert_id` argument in the BigQuery IO connector
> (https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigquery.py#L1471) does not take effect.
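> As a usage sketch (table spec and schema are placeholders), this is how a pipeline would request that insert IDs be skipped:
>
>     import apache_beam as beam
>     from apache_beam.io.gcp.bigquery import WriteToBigQuery
>
>     with beam.Pipeline() as p:
>         _ = (
>             p
>             | beam.Create([{"id": 1, "name": "a"}])
>             | WriteToBigQuery(
>                 "your-project:your_dataset.your_table",  # placeholder
>                 schema="id:INTEGER,name:STRING",
>                 method=WriteToBigQuery.Method.STREAMING_INSERTS,
>                 # Expected to suppress per-row insert IDs; per this
>                 # report, a UUID is still generated for every row.
>                 ignore_insert_id=True))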
> This is because the implementation of the insert-rows request always uses an auto-generated UUID per row, even though `insert_ids` is set to None when `ignore_insert_id` is True: https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigquery_tools.py#L1062
> The implementation should explicitly pass the insert ID as None instead of a generated UUID; see this example: https://github.com/googleapis/python-bigquery/blob/master/samples/table_insert_rows_explicit_none_insert_ids.py#L33
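> A minimal sketch of that explicit-None pattern with the google-cloud-bigquery client (table name is a placeholder):
>
>     from google.cloud import bigquery
>
>     client = bigquery.Client()
>     table_id = "your-project.your_dataset.your_table"  # placeholder
>     rows_to_insert = [{"full_name": "Ada", "age": 32}]
>
>     # An explicit None per row disables insert-ID generation, so
>     # BigQuery skips best-effort deduplication for these rows.
>     errors = client.insert_rows_json(
>         table_id, rows_to_insert, row_ids=[None] * len(rows_to_insert))
>     assert errors == []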
> A unique insert ID per row makes streaming inserts very slow, since BigQuery uses insert IDs for best-effort deduplication.
> Additionally, `DEFAULT_SHARDS_PER_DESTINATION` does not take effect when `ignore_insert_id` is True, because the implementation skips the `ReshufflePerKey` step (https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigquery.py#L1422). When `ignore_insert_id` is True, do we lose batch-size control? A conceptual sketch of the skipped sharding step follows.
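> For reference, a conceptual sketch (the constant value and keying are illustrative, not the connector's exact code) of the sharding pattern that `ReshufflePerKey` provides on the non-ignore path:
>
>     import random
>
>     import apache_beam as beam
>     from apache_beam.transforms.util import ReshufflePerKey
>
>     SHARDS_PER_DESTINATION = 500  # illustrative; the connector defines its own constant
>
>     def key_by_shard(row):
>         # Assigning each row a bounded shard key lets ReshufflePerKey
>         # redistribute rows so the batch per key stays controllable.
>         return (random.randrange(SHARDS_PER_DESTINATION), row)
>
>     with beam.Pipeline() as p:
>         _ = (
>             p
>             | beam.Create([{"id": i} for i in range(10)])
>             | beam.Map(key_by_shard)
>             | ReshufflePerKey())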



--
This message was sent by Atlassian Jira
(v8.3.4#803005)