Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2020/10/23 03:41:30 UTC

[GitHub] [flink] danny0405 commented on pull request #12919: [FLINK-16048][avro] Support read/write confluent schema registry avro…

danny0405 commented on pull request #12919:
URL: https://github.com/apache/flink/pull/12919#issuecomment-714889392


   > @danny0405 @dawidwys
   > Any reasons all the fields read and written by this format has prefix 'record_' ? (I'm using flink sql for this client)
   > I found responsible code probably here but still have problem with this solution:
   > https://github.com/apache/flink/blob/de87a2debde8546e6741390a81f43c032521c3c0/flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/typeutils/AvroSchemaConverter.java#L365
   
   It's because the current strategy infers the Avro schema by converting from the `CREATE TABLE` DDL, and there is no way to get a record name from the DDL. So we use the constant `record` as a prefix. The records written out all have explicit field names, but the types should be compatible.
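
   The naming strategy described above can be sketched roughly as follows. This is an illustrative standalone sketch, not Flink's actual `AvroSchemaConverter` code: the `row_to_avro` helper and its pair-list input format are hypothetical, but the idea is the same, since the DDL carries no record name, a constant `record` seeds the root record name, and nested records are named `<parent>_<field>`:

   ```python
   import json

   def row_to_avro(fields, name="record"):
       """Sketch: convert a list of (field_name, field_type) pairs to an
       Avro record schema dict. field_type is either a primitive type
       string or a nested list of pairs (a row type)."""
       avro_fields = []
       for field_name, field_type in fields:
           if isinstance(field_type, list):
               # Nested row: derive a unique record name from the parent
               # name plus the field name, e.g. "record_user".
               avro_type = row_to_avro(field_type, name + "_" + field_name)
           else:
               avro_type = field_type
           avro_fields.append({"name": field_name, "type": avro_type})
       return {"type": "record", "name": name, "fields": avro_fields}

   schema = row_to_avro([
       ("id", "long"),
       ("user", [("first", "string"), ("last", "string")]),
   ])
   print(json.dumps(schema, indent=2))
   ```

   With this scheme the field names themselves are unchanged; only the generated record names carry the `record_` prefix, which is why a reader compatible on types can still consume the data.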


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org