Posted to issues@beam.apache.org by "Jan (Jira)" <ji...@apache.org> on 2021/09/29 22:38:00 UTC

[jira] [Created] (BEAM-12986) WriteTables leaves behind temporary tables on job failure

Jan created BEAM-12986:
--------------------------

             Summary: WriteTables leaves behind temporary tables on job failure
                 Key: BEAM-12986
                 URL: https://issues.apache.org/jira/browse/BEAM-12986
             Project: Beam
          Issue Type: Improvement
          Components: extensions-java-gcp, io-java-gcp
    Affects Versions: 2.29.0
            Reporter: Jan


I'm running a job that writes to a BigQuery table using `BigQueryIO.writeTableRows().to(new SerializableFunction<ValueInSingleWindow<TableRow>, TableDestination>() { ... })` to pick a destination per element.
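For reference, a minimal sketch of that write setup (the project, dataset, and table names here are placeholders, not the ones from my actual pipeline):

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;
    import org.apache.beam.sdk.transforms.SerializableFunction;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.ValueInSingleWindow;

    // rows is an existing PCollection<TableRow>
    rows.apply(
        BigQueryIO.writeTableRows()
            .to(
                new SerializableFunction<ValueInSingleWindow<TableRow>, TableDestination>() {
                  @Override
                  public TableDestination apply(ValueInSingleWindow<TableRow> input) {
                    // Placeholder routing logic; the real function chooses a table per element.
                    return new TableDestination("my-project:my_dataset.my_table", null);
                  }
                }));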
 
I'm noticing that when my job fails, it leaves behind temporary tables (`beam_bq_job_LOAD_*`) in the destination dataset. These tables are created by load jobs started here:
 
https://github.com/apache/beam/blob/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/WriteTables.java#L273-L284
 
I'd like to specify a temporary dataset for these load-job result tables, but I don't see a way to do so with the Java SDK. It looks like the load-job destination is derived by changing the table id of the final destination:
 
https://github.com/apache/beam/blob/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/WriteTables.java#L255
 
which makes me think the configuration I want doesn't exist. Is there a workaround to avoid these tables being left behind when the job fails? Could such an option be added?
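In the meantime, the only workaround I can think of is to sweep the destination dataset for leftover tables after a failed run. A rough sketch using the google-cloud-bigquery client (project and dataset names are placeholders, and the prefix match is based on the `beam_bq_job_LOAD_*` naming observed above, so it should be used with care if other load jobs may be running):

    import com.google.api.gax.paging.Page;
    import com.google.cloud.bigquery.BigQuery;
    import com.google.cloud.bigquery.BigQueryOptions;
    import com.google.cloud.bigquery.DatasetId;
    import com.google.cloud.bigquery.Table;

    public class CleanupTempLoadTables {
      public static void main(String[] args) {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        // Placeholder project/dataset; point this at the destination dataset of the failed job.
        DatasetId dataset = DatasetId.of("my-project", "my_dataset");
        Page<Table> tables = bigquery.listTables(dataset);
        for (Table table : tables.iterateAll()) {
          String name = table.getTableId().getTable();
          // Matches the temporary load-job result tables left behind after a failure.
          // Caution: this will also match temp tables belonging to jobs that are still running.
          if (name.startsWith("beam_bq_job_LOAD")) {
            bigquery.delete(table.getTableId());
          }
        }
      }
    }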



--
This message was sent by Atlassian Jira
(v8.3.4#803005)