Posted to issues@beam.apache.org by "Chun Yang (Jira)" <ji...@apache.org> on 2021/03/05 18:06:00 UTC

[jira] [Work stopped] (BEAM-11277) WriteToBigQuery with batch file loads does not respect schema update options when there are multiple load jobs

     [ https://issues.apache.org/jira/browse/BEAM-11277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on BEAM-11277 stopped by Chun Yang.
----------------------------------------
> WriteToBigQuery with batch file loads does not respect schema update options when there are multiple load jobs
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: BEAM-11277
>                 URL: https://issues.apache.org/jira/browse/BEAM-11277
>             Project: Beam
>          Issue Type: Bug
>          Components: io-py-gcp, runner-dataflow
>    Affects Versions: 2.21.0, 2.24.0, 2.25.0, 2.28.0
>            Reporter: Chun Yang
>            Assignee: Chun Yang
>            Priority: P2
>         Attachments: repro.py
>
>          Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> When multiple load jobs are needed to write data to a destination table, e.g., when the data is spread over more than [10,000|https://cloud.google.com/bigquery/quotas#load_jobs] URIs, WriteToBigQuery in FILE_LOADS mode will write data into temporary tables and then copy the temporary tables into the destination table.
> When WriteToBigQuery is used with {{write_disposition=BigQueryDisposition.WRITE_APPEND}} and {{additional_bq_parameters=\{"schemaUpdateOptions": ["ALLOW_FIELD_ADDITION"]\}}}, the schema update options are not respected by the jobs that copy data from the temporary tables into the destination table (a configuration sketch follows the quoted description). The effect is that for small jobs (<10K source URIs), schema field addition is allowed; however, if the job is scaled to >10K source URIs, schema field addition fails with an error such as:
> {code:none}Provided Schema does not match Table project:dataset.table. Cannot add fields (field: field_name){code}
> I've been able to reproduce this issue with Python 3.7 and DataflowRunner on Beam 2.21.0 and Beam 2.25.0. I could not reproduce the issue with DirectRunner. A minimal reproducible example is attached.
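
For reference, below is a minimal sketch of a pipeline configured as described above. It is not the attached repro.py: the table name, schema, and input records are placeholders, and at this tiny scale only a single load job is issued, so actually triggering the bug still requires enough input files to exceed the per-load-job URI limit (and, per the report, DataflowRunner rather than DirectRunner).

{code:python}
# A hedged sketch, not the attached repro.py: table, schema, and input
# records are hypothetical and used only for illustration.
import apache_beam as beam
from apache_beam.io.gcp.bigquery import BigQueryDisposition, WriteToBigQuery

TABLE = "project:dataset.table"          # hypothetical destination table
SCHEMA = "id:INTEGER,field_name:STRING"  # field_name is the newly added column

with beam.Pipeline() as p:
    (
        p
        | "Create" >> beam.Create([{"id": 1, "field_name": "a"}])
        | "Write" >> WriteToBigQuery(
            TABLE,
            schema=SCHEMA,
            method=WriteToBigQuery.Method.FILE_LOADS,
            create_disposition=BigQueryDisposition.CREATE_IF_NEEDED,
            write_disposition=BigQueryDisposition.WRITE_APPEND,
            # Schema update options that are reportedly ignored by the copy
            # jobs used when more than one load job is needed.
            additional_bq_parameters={
                "schemaUpdateOptions": ["ALLOW_FIELD_ADDITION"]
            },
        )
    )
{code}

As described in the issue, these options are applied to the load jobs that write the temporary tables, but not to the copy jobs that merge those tables into the destination, so the failure only surfaces once the data spans enough URIs to require multiple load jobs.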


