Posted to commits@hudi.apache.org by "miomiocat (Jira)" <ji...@apache.org> on 2022/04/20 11:22:00 UTC

[jira] [Created] (HUDI-3923) Fix cast exception while reading boolean type of partitioned field

miomiocat created HUDI-3923:
-------------------------------

             Summary: Fix cast exception while reading boolean type of partitioned field
                 Key: HUDI-3923
                 URL: https://issues.apache.org/jira/browse/HUDI-3923
             Project: Apache Hudi
          Issue Type: Bug
            Reporter: miomiocat


Fix cast exception while reading boolean type of partitioned field

 
{code:java}
create table hudi_partitioned_bool (
  id int,
  name string,
  part boolean
) using hudi
PARTITIONED BY (part)
tblproperties (
  type = 'cow',
  primaryKey = 'id',
  preCombineField = 'id'
);

insert into hudi_partitioned_bool select 1, "111", 1;
insert into hudi_partitioned_bool select 2, "222", 0;
{code}
{code:java}
Caused by: java.lang.RuntimeException: Failed to cast value `false` to `BooleanType` for partition column `part`
    at org.apache.spark.sql.execution.datasources.Spark3ParsePartitionUtil.$anonfun$parsePartition$2(Spark3ParsePartitionUtil.scala:72)
    at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at scala.collection.TraversableLike.map(TraversableLike.scala:238)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
    at scala.collection.AbstractTraversable.map(Traversable.scala:108)
    at org.apache.spark.sql.execution.datasources.Spark3ParsePartitionUtil.$anonfun$parsePartition$1(Spark3ParsePartitionUtil.scala:65)
    at scala.Option.map(Option.scala:230)
    at org.apache.spark.sql.execution.datasources.Spark3ParsePartitionUtil.parsePartition(Spark3ParsePartitionUtil.scala:63)
    at org.apache.hudi.SparkHoodieTableFileIndex.parsePartitionPath(SparkHoodieTableFileIndex.scala:255)
    at org.apache.hudi.SparkHoodieTableFileIndex.parsePartitionColumnValues(SparkHoodieTableFileIndex.scala:239)
    at org.apache.hudi.BaseHoodieTableFileIndex.lambda$getAllQueryPartitionPaths$3(BaseHoodieTableFileIndex.java:184)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
    at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566)
    at org.apache.hudi.BaseHoodieTableFileIndex.getAllQueryPartitionPaths(BaseHoodieTableFileIndex.java:187)
    at org.apache.hudi.BaseHoodieTableFileIndex.loadPartitionPathFiles(BaseHoodieTableFileIndex.java:219)
    at org.apache.hudi.BaseHoodieTableFileIndex.doRefresh(BaseHoodieTableFileIndex.java:264)
    at org.apache.hudi.BaseHoodieTableFileIndex.<init>(BaseHoodieTableFileIndex.java:139)
    at org.apache.hudi.SparkHoodieTableFileIndex.<init>(SparkHoodieTableFileIndex.scala:69)
    at org.apache.hudi.HoodieFileIndex.<init>(HoodieFileIndex.scala:81)
    at org.apache.hudi.BaseFileOnlyViewRelation.<init>(BaseFileOnlyViewRelation.scala:56)
    at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:116)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:353)
    at org.apache.spark.sql.execution.datasources.FindDataSourceTable.$anonfun$readDataSourceTable$1(DataSourceStrategy.scala:261)
    at org.sparkproject.guava.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4792)
    at org.sparkproject.guava.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
    at org.sparkproject.guava.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
    at org.sparkproject.guava.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
    at org.sparkproject.guava.cache.LocalCache$Segment.get(LocalCache.java:2257) {code}
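The trace above suggests the partition-value caster never reaches a branch that understands booleans, so the raw string read back from the partition path (e.g. `part=false`) falls through to the generic "Failed to cast value" error. What follows is not the Spark/Hudi source, just a minimal standalone Java sketch of that failure mode; the method name `castPartValue`, the type-name strings, and the `BooleanType` branch marked as the fix are all hypothetical, for illustration only.

```java
// Minimal sketch (hypothetical, not Spark/Hudi code): a partition-value
// caster that only knows a few types. Without the BooleanType branch,
// the string "false" recovered from a partition path like part=false
// falls into the default case and raises the same kind of RuntimeException
// seen in the stack trace above.
public class PartitionCastSketch {

    static Object castPartValue(String raw, String desiredType) {
        switch (desiredType) {
            case "IntegerType":
                return Integer.parseInt(raw);
            case "StringType":
                return raw;
            // Hypothetical fix: add an explicit branch for BooleanType
            // instead of letting it fall through to the error below.
            case "BooleanType":
                return Boolean.parseBoolean(raw);
            default:
                throw new RuntimeException("Failed to cast value `" + raw
                        + "` to `" + desiredType + "` for partition column");
        }
    }

    public static void main(String[] args) {
        // With the BooleanType branch present, the cast succeeds.
        System.out.println(castPartValue("false", "BooleanType")); // prints "false"
        System.out.println(castPartValue("1", "IntegerType"));     // prints "1"
    }
}
```

With the `BooleanType` case removed, `castPartValue("false", "BooleanType")` throws the `RuntimeException`, which mirrors how the read path fails here even though the write path happily produced the `part=true`/`part=false` directories.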
 

 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)