Posted to commits@beam.apache.org by pa...@apache.org on 2020/01/24 22:00:19 UTC
[beam] branch master updated: Fix comments on bigquery.py *
beam.io.gcp.WriteToBigQuery -> beam.io.gcp.bigquery.WriteToBigQuery *
PROJECT.DATASET.TABLE -> PROJECT:DATASET.TABLE
This is an automated email from the ASF dual-hosted git repository.
pabloem pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/beam.git
The following commit(s) were added to refs/heads/master by this push:
new af32cdb Fix comments on bigquery.py * beam.io.gcp.WriteToBigQuery -> beam.io.gcp.bigquery.WriteToBigQuery * PROJECT.DATASET.TABLE -> PROJECT:DATASET.TABLE
new 4590b1c Merge pull request #10657 from kjmrknsn/fix-writetobigquery-doc
af32cdb is described below
commit af32cdbf11df0800e764c2cf5f2f6cc2a5e8bfba
Author: Keiji Yoshida <ke...@google.com>
AuthorDate: Wed Jan 22 23:19:14 2020 +0900
Fix comments on bigquery.py
* beam.io.gcp.WriteToBigQuery -> beam.io.gcp.bigquery.WriteToBigQuery
* PROJECT.DATASET.TABLE -> PROJECT:DATASET.TABLE
---
sdks/python/apache_beam/io/gcp/bigquery.py | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
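The substance of this fix is the BigQuery table-spec format: the project is separated from the dataset by a colon, and the dataset from the table by a dot (`PROJECT:DATASET.TABLE`), not dots throughout. The sketch below is a hypothetical helper, not part of Beam, that parses that format; the regex and function name are illustrative assumptions. It also shows the dynamic-destination callable shape used in the corrected docstring (`lambda row, table_dict: ...`) as plain Python:

```python
import re

# Hypothetical helper (not part of the Beam SDK) illustrating the
# table-spec format the corrected docstrings use:
# 'PROJECT:DATASET.TABLE' -- colon between project and dataset,
# dot between dataset and table.
_TABLE_SPEC = re.compile(
    r'^(?P<project>[\w.-]+):(?P<dataset>\w+)\.(?P<table>\w+)$')


def parse_table_spec(spec):
    """Split a 'PROJECT:DATASET.TABLE' string into its three parts."""
    m = _TABLE_SPEC.match(spec)
    if m is None:
        raise ValueError('expected PROJECT:DATASET.TABLE, got %r' % spec)
    return m.group('project'), m.group('dataset'), m.group('table')


# The dynamic-destination callable from the docstring, demonstrated
# with a plain dict standing in for the AsDict side input:
table_names_dict = {
    'error': 'my_project:dataset1.error_table_for_today',
    'user_log': 'my_project:dataset1.query_table_for_today',
}
pick_table = lambda row, table_dict: table_dict[row['type']]
```

With the dotted form (`my_project.dataset1.error_table_for_today`), `parse_table_spec` raises `ValueError`, which is exactly the confusion the docstring fix removes.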
diff --git a/sdks/python/apache_beam/io/gcp/bigquery.py b/sdks/python/apache_beam/io/gcp/bigquery.py
index 78cd98d..ba79e70 100644
--- a/sdks/python/apache_beam/io/gcp/bigquery.py
+++ b/sdks/python/apache_beam/io/gcp/bigquery.py
@@ -106,13 +106,13 @@ computed at pipeline runtime, one may do something like the following::
]))
table_names = (p | beam.Create([
- ('error', 'my_project.dataset1.error_table_for_today'),
- ('user_log', 'my_project.dataset1.query_table_for_today'),
+ ('error', 'my_project:dataset1.error_table_for_today'),
+ ('user_log', 'my_project:dataset1.query_table_for_today'),
])
table_names_dict = beam.pvalue.AsDict(table_names)
- elements | beam.io.gcp.WriteToBigQuery(
+ elements | beam.io.gcp.bigquery.WriteToBigQuery(
table=lambda row, table_dict: table_dict[row['type']],
table_side_inputs=(table_names_dict,))
@@ -146,7 +146,7 @@ This allows to provide different schemas for different tables::
{'type': 'user_log', 'timestamp': '12:34:59', 'query': 'flu symptom'},
]))
- elements | beam.io.gcp.WriteToBigQuery(
+ elements | beam.io.gcp.bigquery.WriteToBigQuery(
table=compute_table_name,
schema=lambda table: (errors_schema
if 'errors' in table
@@ -183,8 +183,8 @@ clustering properties, one would do the following::
{'country': 'canada', 'timestamp': '12:34:59', 'query': 'influenza'},
]))
- elements | beam.io.gcp.WriteToBigQuery(
- table='project_name1.dataset_2.query_events_table',
+ elements | beam.io.gcp.bigquery.WriteToBigQuery(
+ table='project_name1:dataset_2.query_events_table',
additional_bq_parameters=additional_bq_parameters)
Much like the schema case, the parameter with `additional_bq_parameters` can