Posted to github@beam.apache.org by GitBox <gi...@apache.org> on 2020/04/23 17:59:27 UTC

[GitHub] [beam] tvalentyn commented on issue #11086: [BEAM-8910] Make custom BQ source read from Avro

tvalentyn commented on issue #11086:
URL: https://github.com/apache/beam/pull/11086#issuecomment-618553284


   Thanks Pablo.
   1. Can we take another look at the large comment at the beginning of bigquery.py and see whether it needs an update? It sounds like we now have two ways of reading from and writing to BigQuery. Can we add guidance for users on when to use which, and what they will need to change when switching from one to the other? (See the first sketch after this list.)
   
   2. The same comment says: "BigQuery IO requires values of BYTES datatype to be encoded using base64 encoding when writing to BigQuery. When bytes are read from BigQuery they are returned as base64-encoded bytes." Does this apply to the new source? It sounds like it does not, so please clarify. (See the second sketch after this list.)
   3. What is the test story for BQ IO? I see several branches here: native vs. non-native IO, Avro vs. JSON. Which combinations are tested?
   4. BigQuery tests are one of the major sources of [postcommit flakiness](https://issues.apache.org/jira/issues/?jql=text~BigQuery%20AND%20summary~flaky%20AND%20status%3DOpen%20AND%20summary~Test). What are the plans to address that?
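
   For point 1, here is a minimal sketch contrasting the two read paths as I understand them. The transform names follow current Beam releases (`beam.io.Read(beam.io.BigQuerySource(...))` for the Dataflow-native path, `beam.io.ReadFromBigQuery` for the custom source); the exact entry point exposed at the time of this PR may differ, and the query/table names are placeholders.

   ```python
   # Hedged sketch: two ways of reading from BigQuery in the Python SDK.
   import apache_beam as beam

   with beam.Pipeline() as p:
       # Older, Dataflow-native source: the runner reads BigQuery directly
       # and yields rows as dicts.
       native = (
           p
           | 'ReadNative' >> beam.io.Read(
               beam.io.BigQuerySource(
                   query='SELECT name, value FROM `project.dataset.table`',
                   use_standard_sql=True)))

       # Newer custom source: runs a BigQuery export job and reads the
       # exported files (Avro after this PR), also yielding dicts.
       custom = (
           p
           | 'ReadCustom' >> beam.io.ReadFromBigQuery(
               query='SELECT name, value FROM `project.dataset.table`',
               use_standard_sql=True))
   ```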
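
   For point 2, a small illustration of the BYTES question using only the standard library. With the JSON-based path, BYTES values are base64-encoded on write and come back base64-encoded on read; the open question is whether the Avro-based source returns raw bytes with no base64 step, which the docstring should state explicitly.

   ```python
   # Hedged sketch of BYTES round-tripping on the JSON-based path.
   import base64

   raw = b'\x00\xff\x10'

   # JSON path: the caller base64-encodes before writing, and decodes
   # what the source returns.
   encoded_for_json_write = base64.b64encode(raw)   # b'AP8Q'
   decoded_after_json_read = base64.b64decode(encoded_for_json_write)
   assert decoded_after_json_read == raw

   # Avro path (this PR): presumably users receive `raw` directly, with
   # no base64 step -- worth confirming and documenting.
   ```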
    
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org