Posted to dev@avro.apache.org by "Bin Guo (JIRA)" <ji...@apache.org> on 2013/05/22 05:05:20 UTC

[jira] [Commented] (AVRO-1335) C++ should support field default values

    [ https://issues.apache.org/jira/browse/AVRO-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13663710#comment-13663710 ] 

Bin Guo commented on AVRO-1335:
-------------------------------

OK, I've renamed the summary.
There is one more issue related to bidirectional schema evolution, though it is not a fatal one.
If a union gains a new branch, an old reader will throw an exception whenever the writer writes that branch.
That is expected behavior, but in C++ all such failures surface as the same exception type and differ only in their description strings.
It would be better if they were distinct subclasses of the current exception, so that we could catch a specific condition precisely (e.g. "union has a new branch") and handle it properly.
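A minimal sketch of the kind of hierarchy meant here, assuming avro::Exception from avro/Exception.hh as the base; the derived class name UnknownUnionBranchException is hypothetical and not part of the current API:

{code:title=Hypothetical exception subclass (sketch)}
#include <avro/Exception.hh>
#include <string>

namespace avro {

// Hypothetical: a dedicated type for "the writer's union has a branch the
// reader does not know", derived from the existing avro::Exception so that
// existing catch (const avro::Exception&) sites keep working.
class UnknownUnionBranchException : public Exception {
public:
    explicit UnknownUnionBranchException(const std::string& msg)
        // avro::Exception derives virtually from std::runtime_error in the
        // 1.7.x headers, so the virtual base is initialized here as well.
        : std::runtime_error(msg), Exception(msg) { }
};

} // namespace avro

// Caller side: catch the specific condition first, fall back to the generic one.
void decodeWithPreciseHandling() {
    try {
        // ... resolve and decode ...
    } catch (const avro::UnknownUnionBranchException& e) {
        // handle the "union has a new branch" mismatch specifically
    } catch (const avro::Exception& e) {
        // any other Avro error
    }
}
{code}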
                
> C++ should support field default values
> ---------------------------------------
>
>                 Key: AVRO-1335
>                 URL: https://issues.apache.org/jira/browse/AVRO-1335
>             Project: Avro
>          Issue Type: Improvement
>          Components: c++
>    Affects Versions: 1.7.4
>            Reporter: Bin Guo
>
> We found that resolvingDecoder could not provide bidirectional compatibility between different versions of a schema.
> This is especially a problem for records. For example:
> {code:title=First schema}
> {
>     "type": "record",
>     "name": "TestRecord",
>     "fields": [
>         {
>             "name": "MyData",
>             "type": {
>                 "type": "record",
>                 "name": "SubData",
>                 "fields": [
>                     {
>                         "name": "Version1",
>                         "type": "string"
>                     }
>                 ]
>             }
>         },
>         {
>             "name": "OtherData",
>             "type": "string"
>         }
>     ]
> }
> {code}
> {code:title=Second schema}
> {
>     "type": "record",
>     "name": "TestRecord",
>     "fields": [
>         {
>             "name": "MyData",
>             "type": {
>                 "type": "record",
>                 "name": "SubData",
>                 "fields": [
>                     {
>                         "name": "Version1",
>                         "type": "string"
>                     },
>                     {
>                         "name": "Version2",
>                         "type": "string"
>                     }
>                 ]
>             }
>         },
>         {
>             "name": "OtherData",
>             "type": "string"
>         }
>     ]
> }
> {code}
> Say node A knows only the first schema and node B knows the second schema, which has an additional field.
> Data written by node B can be resolved against the first schema, because the extra field is marked as skipped.
> But data written by node A cannot be resolved against the second schema; it throws the exception *"Don't know how to handle excess fields for reader."*
> This is because the data is decoded exactly according to the auto-generated codec_traits, which tries to read the excess field.
> We cannot simply ignore the excess field in the record either, since the data that follows the troublesome record also needs to be resolved.
> This problem has actually blocked us for a very long time.
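> A minimal sketch of how the asymmetry shows up with the public C++ API (names taken from the avro/ headers; the schema strings are the two schemas above in compact form, and whether the exception is raised when the resolving decoder is built or when the record is decoded is not pinned down here):
> {code:title=Old writer read with new reader schema (sketch)}
> #include <avro/Compiler.hh>
> #include <avro/Decoder.hh>
> #include <avro/Encoder.hh>
> #include <avro/Exception.hh>
> #include <avro/Stream.hh>
> #include <avro/ValidSchema.hh>
> #include <iostream>
> #include <string>
> 
> int main() {
>     // The two schemas above, written compactly.
>     const std::string writerJson =
>         "{\"type\":\"record\",\"name\":\"TestRecord\",\"fields\":["
>         "{\"name\":\"MyData\",\"type\":{\"type\":\"record\",\"name\":\"SubData\","
>         "\"fields\":[{\"name\":\"Version1\",\"type\":\"string\"}]}},"
>         "{\"name\":\"OtherData\",\"type\":\"string\"}]}";
>     const std::string readerJson =
>         "{\"type\":\"record\",\"name\":\"TestRecord\",\"fields\":["
>         "{\"name\":\"MyData\",\"type\":{\"type\":\"record\",\"name\":\"SubData\","
>         "\"fields\":[{\"name\":\"Version1\",\"type\":\"string\"},"
>         "{\"name\":\"Version2\",\"type\":\"string\"}]}},"
>         "{\"name\":\"OtherData\",\"type\":\"string\"}]}";
> 
>     avro::ValidSchema writerSchema = avro::compileJsonSchemaFromString(writerJson);
>     avro::ValidSchema readerSchema = avro::compileJsonSchemaFromString(readerJson);
> 
>     // Encode a datum as node A would: under the first schema, TestRecord is
>     // just two strings back to back (SubData.Version1, then OtherData).
>     auto out = avro::memoryOutputStream();
>     avro::EncoderPtr e = avro::binaryEncoder();
>     e->init(*out);
>     e->encodeString("v1-data");   // SubData.Version1
>     e->encodeString("other");     // OtherData
>     e->flush();
> 
>     // Node B tries to read it with the second schema via a resolving decoder.
>     auto in = avro::memoryInputStream(*out);
>     try {
>         avro::DecoderPtr d =
>             avro::resolvingDecoder(writerSchema, readerSchema, avro::binaryDecoder());
>         d->init(*in);
>         std::cout << d->decodeString() << std::endl;  // SubData.Version1
>         std::cout << d->decodeString() << std::endl;  // SubData.Version2?
>         std::cout << d->decodeString() << std::endl;  // OtherData
>     } catch (const avro::Exception& ex) {
>         // SubData.Version2 was never written and has no default, so resolution
>         // fails with "Don't know how to handle excess fields for reader."
>         std::cerr << "resolution failed: " << ex.what() << std::endl;
>     }
>     return 0;
> }
> {code}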

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira