Posted to issues@flink.apache.org by "hehuiyuan (Jira)" <ji...@apache.org> on 2019/12/16 02:48:00 UTC

[jira] [Updated] (FLINK-15158) Why convert integer to BigDecimal for format-json when Kafka is used

     [ https://issues.apache.org/jira/browse/FLINK-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

hehuiyuan updated FLINK-15158:
------------------------------
    Attachment: image-2019-12-16-10-47-23-565.png

> Why convert integer to BigDecimal for format-json when Kafka is used
> ---------------------------------------------------------------------
>
>                 Key: FLINK-15158
>                 URL: https://issues.apache.org/jira/browse/FLINK-15158
>             Project: Flink
>          Issue Type: Wish
>          Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>            Reporter: hehuiyuan
>            Priority: Major
>         Attachments: image-2019-12-16-10-47-23-565.png, image-2019-12-16-10-47-43-437.png
>
>
> For example, I have a table `table1`:
> root
>  |-- name: STRING
>  |-- age: INT
>  |-- sex: STRING
>  
> Then I want to `insert into kafka select * from table1`.
> JSON schema:
> {type: 'object', properties: {name: {type: 'string'}, age: {type: 'integer'}, sex: {type: 'string'}}}
>  
> ```
> descriptor.withFormat(new Json().jsonSchema(jsonSchema)).withSchema(schema);
> ```
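>  
> For context, here is a minimal sketch (assuming the Flink 1.9 flink-json module) of where the conversion happens: JsonRowSchemaConverter maps the JSON schema types 'integer' and 'number' to Types.BIG_DEC, because JSON itself does not bound the precision of numbers. The schema string below is just the strict-JSON form of the one above:
> ```
> import org.apache.flink.api.common.typeinfo.TypeInformation;
> import org.apache.flink.formats.json.JsonRowSchemaConverter;
> import org.apache.flink.types.Row;
> 
> String jsonSchema =
>     "{\"type\": \"object\", \"properties\": {"
>         + "\"name\": {\"type\": \"string\"},"
>         + "\"age\": {\"type\": \"integer\"},"
>         + "\"sex\": {\"type\": \"string\"}}}";
> 
> // JSON schema 'integer' carries no fixed precision, so the converter
> // derives BigDecimal rather than Integer for the 'age' field.
> TypeInformation<Row> derived = JsonRowSchemaConverter.convert(jsonSchema);
> System.out.println(derived); // Row(name: String, age: BigDecimal, sex: String)
> ```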
>  
> Exception in thread "main" org.apache.flink.table.api.ValidationException: Field types of query result and registered TableSink [sink_example2] do not match.
> *Query result schema: [name: String, age: Integer, sex: String]*
> *TableSink schema:    [name: String, age: BigDecimal, sex: String]*
>  at org.apache.flink.table.sinks.TableSinkUtils$.validateSink(TableSinkUtils.scala:65)
>  at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:156)
>  at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:155)
>  at scala.Option.map(Option.scala:146)
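>  
> One possible workaround (a sketch only; the sink name `sink_example2` comes from the exception above, and `tableEnv` stands for the registered StreamTableEnvironment) is to cast the query side to match the derived sink type:
> ```
> // Cast 'age' to DECIMAL so the query result schema matches the
> // TableSink schema derived from the JSON schema.
> tableEnv.sqlUpdate(
>     "INSERT INTO sink_example2 "
>         + "SELECT name, CAST(age AS DECIMAL) AS age, sex FROM table1");
> ```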
>  
> I know that the `integer` type in the JSON schema is converted to BigDecimal. But for the above scenario, does it really have to be forced to decimal?
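>  
> Alternatively (again just a sketch, assuming the Flink 1.9 descriptor API), the JSON schema can be skipped entirely and the format type derived from the table schema, which keeps `age` as INT end to end:
> ```
> // Json#deriveSchema() reuses the declared table schema for the format,
> // so no JSON-schema-based BigDecimal conversion is involved.
> descriptor.withFormat(new Json().deriveSchema()).withSchema(schema);
> ```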
>  


