Posted to dev@avro.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2018/02/28 20:49:00 UTC

[jira] [Commented] (AVRO-1335) C++ should support field default values

    [ https://issues.apache.org/jira/browse/AVRO-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16381027#comment-16381027 ] 

ASF GitHub Bot commented on AVRO-1335:
--------------------------------------

vimota commented on issue #241: AVRO-1335: Adds C++ support for default values in schema serializatio…
URL: https://github.com/apache/avro/pull/241#issuecomment-369377545
 
 
   Any updates on this?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


> C++ should support field default values
> ---------------------------------------
>
>                 Key: AVRO-1335
>                 URL: https://issues.apache.org/jira/browse/AVRO-1335
>             Project: Avro
>          Issue Type: Improvement
>          Components: c++
>    Affects Versions: 1.7.4
>            Reporter: Bin Guo
>            Priority: Major
>         Attachments: AVRO-1335.patch
>
>
> We found that resolvingDecoder could not provide bidirectional compatibility between different version of schemas.
> Especially for records, for example:
> {code:title=First schema}
> {
>     "type": "record",
>     "name": "TestRecord",
>     "fields": [
>         {
>             "name": "MyData",
>             "type": {
>                 "type": "record",
>                 "name": "SubData",
>                 "fields": [
>                     {
>                         "name": "Version1",
>                         "type": "string"
>                     }
>                 ]
>             }
>         },
>         {
>             "name": "OtherData",
>             "type": "string"
>         }
>     ]
> }
> {code}
> {code:title=Second schema}
> {
>     "type": "record",
>     "name": "TestRecord",
>     "fields": [
>         {
>             "name": "MyData",
>             "type": {
>                 "type": "record",
>                 "name": "SubData",
>                 "fields": [
>                     {
>                         "name": "Version1",
>                         "type": "string"
>                     },
>                     {
>                         "name": "Version2",
>                         "type": "string"
>                     }
>                 ]
>             }
>         },
>         {
>             "name": "OtherData",
>             "type": "string"
>         }
>     ]
> }
> {code}
> Say node A knows only the first schema and node B knows the second schema, which has an additional field.
> Any data generated by node B can be resolved against the first schema, because the additional field is marked as skipped.
> But data generated by node A cannot be resolved against the second schema; resolution throws the exception *"Don't know how to handle excess fields for reader."*
> This happens because the data is decoded exactly according to the auto-generated codec_traits, which try to read the excess field.
> The problem is that we cannot simply ignore the excess field in the record, since the data after the troublesome record still needs to be resolved.
> This problem has blocked us for a very long time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)