Posted to dev@flink.apache.org by "Bowen Li (Jira)" <ji...@apache.org> on 2020/03/26 22:50:00 UTC

[jira] [Created] (FLINK-16816) planner doesn't parse timestamp array correctly

Bowen Li created FLINK-16816:
--------------------------------

             Summary: planner doesn't parse timestamp array correctly
                 Key: FLINK-16816
                 URL: https://issues.apache.org/jira/browse/FLINK-16816
             Project: Flink
          Issue Type: Bug
          Components: Table SQL / Planner, Table SQL / Runtime
            Reporter: Bowen Li
            Assignee: Kurt Young
             Fix For: 1.11.0


planner doesn't parse timestamp array correctly.

 

Repro: 

In an input format's (like JDBCInputFormat's) {{nextRecord(Row)}} API:
 # when setting a timestamp datum as java.sql.Timestamp, it works fine
 # when setting an array of timestamp datums as java.sql.Timestamp[], it breaks with the stack trace below (a minimal repro sketch follows the trace)

 
{code:java}
Caused by: java.lang.ClassCastException: java.sql.Timestamp cannot be cast to java.time.LocalDateTime
	at org.apache.flink.table.dataformat.DataFormatConverters$LocalDateTimeConverter.toInternalImpl(DataFormatConverters.java:748)
	at org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toBinaryArray(DataFormatConverters.java:1110)
	at org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toInternalImpl(DataFormatConverters.java:1093)
	at org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toInternalImpl(DataFormatConverters.java:1068)
	at org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toInternal(DataFormatConverters.java:344)
	at org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toInternalImpl(DataFormatConverters.java:1377)
	at org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toInternalImpl(DataFormatConverters.java:1365)
	at org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toInternal(DataFormatConverters.java:344)
	at SourceConversion$1.processElement(Unknown Source)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:714)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:689)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:669)
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
	at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
	at org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:93)
	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
	at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:208)
{code}
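
A minimal repro sketch of the two cases above. This is not the original JDBCInputFormat code: the class name and field layout are hypothetical, and it assumes the table schema declares the second column as an array of TIMESTAMP.

{code:java}
import java.sql.Timestamp;

import org.apache.flink.api.common.io.GenericInputFormat;
import org.apache.flink.types.Row;

// Hypothetical stand-in for JDBCInputFormat: emits a single Row whose second
// field is a java.sql.Timestamp[] -- the case that triggers the exception.
public class TimestampArrayInputFormat extends GenericInputFormat<Row> {

	private boolean emitted = false;

	@Override
	public boolean reachedEnd() {
		return emitted;
	}

	@Override
	public Row nextRecord(Row reuse) {
		emitted = true;
		Row row = new Row(2);
		// case 1: a single timestamp datum as java.sql.Timestamp -- works fine
		row.setField(0, Timestamp.valueOf("2020-03-26 00:00:00"));
		// case 2: an array of timestamp datums as java.sql.Timestamp[] -- fails in
		// DataFormatConverters$LocalDateTimeConverter with the ClassCastException above
		row.setField(1, new Timestamp[]{Timestamp.valueOf("2020-03-26 00:00:00")});
		return row;
	}
}
{code}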

It seems that the planner runtime handles java.sql.Timestamp differently in these two cases: a top-level Timestamp field is converted correctly, while Timestamp elements inside an array are handed to a converter that expects java.time.LocalDateTime.
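
Until this is fixed, one possible workaround sketch (untested, and based only on the fact that the element converter in the stack trace is LocalDateTimeConverter) is to convert the JDBC timestamps before putting the array into the Row:

{code:java}
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.util.Arrays;

// Hypothetical helper: hand the planner a LocalDateTime[] instead of a
// Timestamp[], matching what LocalDateTimeConverter appears to expect.
public final class TimestampArrays {

	private TimestampArrays() {
	}

	public static LocalDateTime[] toLocalDateTimes(Timestamp[] timestamps) {
		return Arrays.stream(timestamps)
				.map(Timestamp::toLocalDateTime)
				.toArray(LocalDateTime[]::new);
	}
}
{code}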



--
This message was sent by Atlassian Jira
(v8.3.4#803005)