Posted to issues@beam.apache.org by "Beam JIRA Bot (Jira)" <ji...@apache.org> on 2021/01/09 17:13:05 UTC

[jira] [Commented] (BEAM-11134) Using WriteToBigQuery FILE_LOADS in a streaming pipeline does not delete temp tables

    [ https://issues.apache.org/jira/browse/BEAM-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17261912#comment-17261912 ] 

Beam JIRA Bot commented on BEAM-11134:
--------------------------------------

This issue was marked "stale-P2" and has not received a public comment in 14 days. It is now automatically moved to P3. If you are still affected by it, you can comment and move it back to P2.

> Using WriteToBigQuery FILE_LOADS in a streaming pipeline does not delete temp tables
> ------------------------------------------------------------------------------------
>
>                 Key: BEAM-11134
>                 URL: https://issues.apache.org/jira/browse/BEAM-11134
>             Project: Beam
>          Issue Type: Bug
>          Components: io-py-gcp
>    Affects Versions: 2.24.0
>         Environment: Running on DataflowRunner on GCP Dataflow.
>            Reporter: Luke Kavenagh
>            Priority: P3
>              Labels: beam, dataflow, gcp, python
>
> Using the {{FILE_LOADS}} method in {{WriteToBigQuery}} initially appears to work: load jobs are issued, and they (at least sometimes) succeed, landing the data in the correct tables.
> But the temporary tables that get created are never deleted. Often the data is never even copied from the temp tables to the destination.
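> A minimal sketch of the kind of pipeline that hits this; the project, topic, table, and schema below are placeholders, not the real pipeline:
> {code:python}
> import json
>
> import apache_beam as beam
> from apache_beam.options.pipeline_options import PipelineOptions
>
> # Placeholder options; the actual pipeline runs on DataflowRunner.
> options = PipelineOptions(streaming=True)
>
> with beam.Pipeline(options=options) as p:
>   (p
>    | 'Read' >> beam.io.ReadFromPubSub(topic='projects/my-project/topics/my-topic')
>    | 'Parse' >> beam.Map(json.loads)
>    | 'Write' >> beam.io.WriteToBigQuery(
>        table='my-project:my_dataset.my_table',
>        schema='name:STRING,count:INTEGER',
>        method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
>        triggering_frequency=300,  # seconds between load jobs
>        write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
>        create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED))
> {code}
> Each trigger can create a fresh set of temp tables in the destination dataset; those are the tables that pile up.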
> In the code ([https://github.com/apache/beam/blob/aca9099acca969dc217ab183782e5270347cd354/sdks/python/apache_beam/io/gcp/bigquery_file_loads.py#L846]), it appears that after issuing the load jobs, Beam should wait for them to finish, then copy the data out of the temp tables and delete them; however, in a streaming pipeline it seems these steps never complete.
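> Until that cleanup happens, leftover tables can be cleared by hand. A minimal sketch using the google-cloud-bigquery client; the prefix below is an assumption, so verify what the temp tables in your dataset are actually named before deleting anything:
> {code:python}
> from google.cloud import bigquery
>
> client = bigquery.Client(project='my-project')  # placeholder project
>
> # Assumption: the leftover temp tables share a common name prefix.
> # Inspect your dataset and adjust; this deletes tables irreversibly.
> TEMP_PREFIX = 'beam_load_'
>
> for table in client.list_tables('my-project.my_dataset'):
>     if table.table_id.startswith(TEMP_PREFIX):
>         client.delete_table(table.reference)
>         print('Deleted', table.table_id)
> {code}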
>  
> In case it's not clear, this is for the Python SDK.
>  
> For reference: https://stackoverflow.com/questions/64526500/using-writetobigquery-file-loads-in-a-streaming-pipeline-just-creates-a-lot-of-t/64543619#64543619



--
This message was sent by Atlassian Jira
(v8.3.4#803005)