Posted to issues@beam.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2020/05/01 02:04:00 UTC

[jira] [Work logged] (BEAM-8910) Use AVRO instead of JSON in BigQuery bounded source.

     [ https://issues.apache.org/jira/browse/BEAM-8910?focusedWorklogId=429322&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-429322 ]

ASF GitHub Bot logged work on BEAM-8910:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 01/May/20 02:03
            Start Date: 01/May/20 02:03
    Worklog Time Spent: 10m 
      Work Description: pabloem commented on pull request #11086:
URL: https://github.com/apache/beam/pull/11086#issuecomment-622212825


   > Thanks Pablo.
   > 
   > 1. Can we take another look at the big comment at the beginning of bigquery.py and see if it needs an update? It sounds like we now have two ways of reading from and writing to BigQuery. Can we add guidance for users on when to use which, and what they will need to change when they switch from one to the other?
   > 2. The same comment says: "BigQuery IO requires values of BYTES datatype to be encoded using base64 encoding when writing to BigQuery. When bytes are read from BigQuery they are returned as base64-encoded bytes." Does this apply to the new source? It sounds like it does not, so please clarify.
   
   Ah, great observation. Thanks, Valentyn. I've rephrased that. LMK what you think now.
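
   To make the base64 point concrete, here is a minimal sketch of the round-trip the comment describes for the JSON-based path. The column name and value are placeholders, and the claim that the new Avro-based source returns raw bytes without this step is an assumption based on the rephrased docs:

   ```python
   import base64

   # JSON-based path: BYTES values must be base64-encoded before writing.
   # 'payload' is a hypothetical column name used only for illustration.
   row_to_write = {'payload': base64.b64encode(b'\x00\x01binary').decode('ascii')}

   # JSON-based path: BYTES values read back are base64-encoded and must be decoded.
   raw = base64.b64decode(row_to_write['payload'])
   assert raw == b'\x00\x01binary'
   ```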
   
   > 3. What is the test story for BQ IO? I see several branches here: native vs non-native IO, Avro vs JSON. Which combinations are tested?
   
   There are tests for pretty much all the variants (a usage sketch follows the list):
   - Native: `BigQuerySource`. This is widely tested and used.
   - Custom (AVRO) `ReadFromBigQuery`. This is tested by almost all of the BQ-related PostCommits.
   - Custom (JSON) `ReadFromBigQuery(use_json_exports=True)`. This is tested in a couple of the BQ-related PostCommits (note that `BigQueryQueryToTableIT.test_new_types_json` checks all the special types, including bytes and datetime-related types).
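
   For reference, a minimal usage sketch of the three read paths (the table name is a placeholder):

   ```python
   import apache_beam as beam

   with beam.Pipeline() as p:
       # Native source: JSON-based, Dataflow-specific.
       native = p | 'ReadNative' >> beam.io.Read(
           beam.io.BigQuerySource(table='my-project:my_dataset.my_table'))

       # Custom source with Avro exports (the default for ReadFromBigQuery).
       avro_rows = p | 'ReadAvro' >> beam.io.ReadFromBigQuery(
           table='my-project:my_dataset.my_table')

       # Custom source with JSON exports.
       json_rows = p | 'ReadJson' >> beam.io.ReadFromBigQuery(
           table='my-project:my_dataset.my_table', use_json_exports=True)
   ```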
   
   > 4. BigQuery tests are one of the major sources of [postcommit flakiness](https://issues.apache.org/jira/issues/?jql=text~BigQuery%20AND%20summary~flaky%20AND%20status%3DOpen%20AND%20summary~Test). What are the plans to address that?
   
   These are all Write tests. The Read from BQ tests are less troublesome, likely because the transform is less complex and the asserts are kept in the pipeline (vs BigQuery matchers). I'm happy to own all breakages related to this test - though I expect few compared to the WriteToBQ issues.
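
   For context, "asserts kept in the pipeline" means the in-pipeline assertion style sketched below, rather than matchers that query BigQuery after the pipeline finishes. This is a minimal sketch; the table name and expected rows are placeholders:

   ```python
   import apache_beam as beam
   from apache_beam.testing.util import assert_that, equal_to

   with beam.Pipeline() as p:
       rows = p | beam.io.ReadFromBigQuery(table='my-project:my_dataset.my_table')
       # The expectation is checked inside the pipeline itself, so a failure
       # surfaces as a pipeline failure instead of a post-run matcher mismatch.
       assert_that(rows, equal_to([{'name': 'a', 'value': 1}]))
   ```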


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 429322)
    Time Spent: 9h 10m  (was: 9h)

> Use AVRO instead of JSON in BigQuery bounded source.
> ----------------------------------------------------
>
>                 Key: BEAM-8910
>                 URL: https://issues.apache.org/jira/browse/BEAM-8910
>             Project: Beam
>          Issue Type: Improvement
>          Components: sdk-py-core
>            Reporter: Kamil Wasilewski
>            Assignee: Pablo Estrada
>            Priority: Minor
>          Time Spent: 9h 10m
>  Remaining Estimate: 0h
>
> The proposed BigQuery bounded source in the Python SDK (see PR: [https://github.com/apache/beam/pull/9772]) uses a BigQuery export job to take a snapshot of the table and read from each produced JSON file. A performance improvement can be gained by switching to AVRO instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)