Posted to commits@hudi.apache.org by "Udit Mehrotra (Jira)" <ji...@apache.org> on 2021/08/25 08:45:00 UTC

[jira] [Updated] (HUDI-1683) When using Hudi on Flink to write data to HDFS, ClassCastException: scala.Tuple2 cannot be cast to org.apache.hudi.common.util.collection.Pair

     [ https://issues.apache.org/jira/browse/HUDI-1683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Udit Mehrotra updated HUDI-1683:
--------------------------------
    Fix Version/s:     (was: 0.9.0)
                   0.10.0

> When using Hudi on Flink to write data to HDFS, ClassCastException: scala.Tuple2 cannot be cast to org.apache.hudi.common.util.collection.Pair
> -----------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HUDI-1683
>                 URL: https://issues.apache.org/jira/browse/HUDI-1683
>             Project: Apache Hudi
>          Issue Type: Bug
>          Components: Flink Integration
>    Affects Versions: 0.9.0
>         Environment: CentOS7:
>         hadoop-3.1.0
>         flink-1.12.2
>            Reporter: MengYao
>            Priority: Critical
>              Labels: BaseFlinkCommitActionExecutor
>             Fix For: 0.10.0
>
>         Attachments: Hudi-Flink-CaseClassException_20210312102417.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Caused by: java.lang.ClassCastException: scala.Tuple2 cannot be cast to org.apache.hudi.common.util.collection.Pair
> 	at org.apache.hudi.table.action.commit.UpsertPartitioner.getPartition(UpsertPartitioner.java:266)
> 	at org.apache.hudi.table.action.commit.BaseFlinkCommitActionExecutor.lambda$partition$2(BaseFlinkCommitActionExecutor.java:155)
> 	at java.util.stream.Collectors.lambda$groupingBy$45(Collectors.java:907)
> 	at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
> 	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> 	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
> 	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> 	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> 	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> 	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> 	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> 	at org.apache.hudi.table.action.commit.BaseFlinkCommitActionExecutor.partition(BaseFlinkCommitActionExecutor.java:155)
> 	at org.apache.hudi.table.action.commit.BaseFlinkCommitActionExecutor.execute(BaseFlinkCommitActionExecutor.java:115)
> 	at org.apache.hudi.table.action.commit.BaseFlinkCommitActionExecutor.execute(BaseFlinkCommitActionExecutor.java:68)
> 	at org.apache.hudi.table.action.commit.AbstractWriteHelper.write(AbstractWriteHelper.java:55)
> 	... 33 more
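
[Editor's note] The trace shows UpsertPartitioner.getPartition receiving a scala.Tuple2 where it unconditionally casts to Hudi's Pair, inside the Collectors.groupingBy classifier in BaseFlinkCommitActionExecutor.partition. The self-contained Java sketch below reproduces that failure pattern and shows a defensive normalization; Tuple2, Pair, and both getPartition variants here are simplified stand-ins, not the actual Hudi or Flink classes.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CastSketch {
    // Hypothetical stand-ins for scala.Tuple2 and
    // org.apache.hudi.common.util.collection.Pair.
    record Tuple2(Object f0, Object f1) {}
    record Pair(Object left, Object right) {}

    // Mirrors the failing pattern: the key is typed Object and is
    // blindly cast to Pair, so a Tuple2 key triggers ClassCastException.
    static int getPartitionUnsafe(Object key) {
        Pair p = (Pair) key;
        return Math.floorMod(p.left().hashCode(), 4);
    }

    // Defensive variant: normalize a Tuple2 key to Pair before casting.
    static int getPartitionSafe(Object key) {
        Pair p = (key instanceof Tuple2 t)
                ? new Pair(t.f0(), t.f1())   // convert the Scala-style tuple
                : (Pair) key;
        return Math.floorMod(p.left().hashCode(), 4);
    }

    public static void main(String[] args) {
        List<Object> keys =
                List.of(new Tuple2("file-01", "bkt"), new Pair("file-02", "bkt"));

        boolean threw = false;
        try {
            keys.forEach(CastSketch::getPartitionUnsafe);
        } catch (ClassCastException e) {
            threw = true;   // same exception as the reported stack trace
        }
        System.out.println("unsafe variant threw: " + threw);

        // The safe variant can group records, analogous to the
        // Collectors.groupingBy call in BaseFlinkCommitActionExecutor.
        Map<Integer, List<Object>> grouped =
                keys.stream().collect(Collectors.groupingBy(CastSketch::getPartitionSafe));
        System.out.println("groups formed: " + grouped.size());
    }
}
```

Whether the real fix belongs in the partitioner or in the Flink executor that builds the keys is for the Hudi maintainers to decide; the sketch only illustrates why the cast fails and one way to make the classifier tolerant of both key shapes.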



--
This message was sent by Atlassian Jira
(v8.3.4#803005)