Posted to dev@avro.apache.org by "Bin Guo (JIRA)" <ji...@apache.org> on 2013/05/21 05:17:16 UTC

[jira] [Updated] (AVRO-1335) ResolvingDecoder should provide bidirectional compatibility between different version of schemas

     [ https://issues.apache.org/jira/browse/AVRO-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bin Guo updated AVRO-1335:
--------------------------

    Description: 
We found that ResolvingDecoder does not provide bidirectional compatibility between different versions of a schema.

This is especially a problem for records. For example:

{code:title=First schema}
{
    "type": "record",
    "name": "TestRecord",
    "fields": [
        {
            "name": "MyData",
            "type": {
                "type": "record",
                "name": "SubData",
                "fields": [
                    {
                        "name": "Version1",
                        "type": "string"
                    }
                ]
            }
        },
        {
            "name": "OtherData",
            "type": "string"
        }
    ]
}
{code}

{code:title=Second schema}
{
    "type": "record",
    "name": "TestRecord",
    "fields": [
        {
            "name": "MyData",
            "type": {
                "type": "record",
                "name": "SubData",
                "fields": [
                    {
                        "name": "Version1",
                        "type": "string"
                    },
                    {
                        "name": "Version2",
                        "type": "string"
                    }
                ]
            }
        },
        {
            "name": "OtherData",
            "type": "string"
        }
    ]
}
{code}
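The asymmetry between these two schemas can be sketched with a minimal, illustrative model of Avro's record-resolution rules. This is not the Avro C++ API, just a toy: writer-only fields are skipped, while reader-only fields must be filled from a default, otherwise resolution fails.

```python
# Illustrative sketch of Avro's schema-resolution rules for records
# (not the Avro C++ API): writer-only fields are skipped, while
# reader-only fields must be filled from a default -- otherwise
# resolution fails, which is the asymmetry this issue describes.

def resolve_record(writer_fields, reader_fields, datum):
    """writer_fields / reader_fields: lists of (name, default) pairs,
    where default=None means "no default"; datum: values written with
    the writer schema, as a dict."""
    writer_names = {name for name, _ in writer_fields}
    reader_names = {name for name, _ in reader_fields}
    result = {}
    for name, _ in writer_fields:
        if name in reader_names:
            result[name] = datum[name]  # known to both sides: read it
        # else: writer-only field, skipped (B -> A works)
    for name, default in reader_fields:
        if name not in writer_names:
            if default is None:
                # reader-only field without a default: resolution error,
                # analogous to "Don't know how to handle excess fields
                # for reader." (A -> B fails)
                raise ValueError("no default for reader field " + name)
            result[name] = default
    return result

first = [("Version1", None)]
second = [("Version1", None), ("Version2", None)]

# Node B -> node A: the extra Version2 is skipped; succeeds.
print(resolve_record(second, first, {"Version1": "a", "Version2": "b"}))

# Node A -> node B: reader expects Version2, writer never wrote it; fails.
try:
    resolve_record(first, second, {"Version1": "a"})
except ValueError as e:
    print("error:", e)
```

Per the Avro specification, giving the new Version2 field a "default" in the reader schema lets resolution supply the missing value; that is the spec's mechanism for this direction of compatibility.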

Say node A knows only the first schema and node B knows the second schema, which has an extra field.
Any data generated by node B can be resolved with the first schema, because the additional field is marked as skipped.
But data generated by node A cannot be resolved with the second schema; the decoder throws the exception *"Don't know how to handle excess fields for reader."*
This happens because the data is decoded exactly according to the auto-generated codec_traits, which try to read the excess field.
We cannot simply ignore the excess field in the record either, since the data that follows the troublesome record also needs to be resolved.
This problem blocked us for a very long time.
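The reason the excess field cannot simply be ignored follows from how Avro lays records out on the wire: the binary encoding carries no field names or counts, only the concatenated field values. A minimal sketch with a toy length-prefixed string encoding (not Avro's actual codec) shows how a field-count mismatch corrupts everything after it:

```python
# Toy positional encoding, loosely modelled on Avro's binary format:
# records carry no field names or counts, just concatenated values.

def encode_strings(values):
    out = b""
    for v in values:
        data = v.encode()
        out += bytes([len(data)]) + data  # 1-byte length prefix
    return out

def decode_strings(buf, count):
    values, pos = [], 0
    for _ in range(count):
        n = buf[pos]
        pos += 1
        values.append(buf[pos:pos + n].decode())
        pos += n
    return values, pos

# Writer uses the first schema: SubData.Version1, then OtherData.
payload = encode_strings(["v1-data", "other-data"])

# Decoding with the writer's layout (two strings) is fine.
print(decode_strings(payload, 2))

# Decoding as if the second schema had been used (three strings) consumes
# OtherData as Version2 and then falls off the end of the buffer -- every
# value after the mismatched record is corrupted or missing, which is why
# the decoder must resolve the record rather than just skip the discrepancy.
try:
    decode_strings(payload, 3)
except IndexError:
    print("misaligned: ran out of bytes")
```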



> ResolvingDecoder should provide bidirectional compatibility between different version of schemas
> ------------------------------------------------------------------------------------------------
>
>                 Key: AVRO-1335
>                 URL: https://issues.apache.org/jira/browse/AVRO-1335
>             Project: Avro
>          Issue Type: Improvement
>          Components: c++
>    Affects Versions: 1.7.4
>            Reporter: Bin Guo
>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira