Posted to commits@hudi.apache.org by "Balaji Varadarajan (Jira)" <ji...@apache.org> on 2019/10/16 09:24:00 UTC

[jira] [Resolved] (HUDI-301) Failed to update a non-partition MOR table

     [ https://issues.apache.org/jira/browse/HUDI-301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Balaji Varadarajan resolved HUDI-301.
-------------------------------------
    Resolution: Fixed

> Failed to update a non-partition MOR table
> ------------------------------------------
>
>                 Key: HUDI-301
>                 URL: https://issues.apache.org/jira/browse/HUDI-301
>             Project: Apache Hudi (incubating)
>          Issue Type: Bug
>          Components: Common Core
>            Reporter: Wenning Ding
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.5.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> We hit this exception while trying to update a field of a non-partitioned MOR (merge-on-read) table.
> {code:java}
> org.apache.hudi.exception.HoodieUpsertException: Error upserting bucketType UPDATE for partition :0
> 	at org.apache.hudi.table.HoodieCopyOnWriteTable.handleUpsertPartition(HoodieCopyOnWriteTable.java:273)
> 	at org.apache.hudi.HoodieWriteClient.lambda$upsertRecordsInternal$507693af$1(HoodieWriteClient.java:457)
> 	at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
> 	at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
> 	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:853)
> 	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:853)
> 	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> 	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> 	at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:337)
> 	at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:335)
> 	at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1182)
> 	at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1156)
> 	at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1091)
> 	at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1156)
> 	at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:882)
> 	at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
> 	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:123)
> 	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
> 	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: Can not create a Path from an empty string
> 	at org.apache.hadoop.fs.Path.checkPathArg(Path.java:163)
> 	at org.apache.hadoop.fs.Path.<init>(Path.java:175)
> 	at org.apache.hadoop.fs.Path.<init>(Path.java:110)
> 	at org.apache.hudi.io.HoodieAppendHandle.init(HoodieAppendHandle.java:145)
> 	at org.apache.hudi.io.HoodieAppendHandle.doAppend(HoodieAppendHandle.java:194)
> 	at org.apache.hudi.table.HoodieMergeOnReadTable.handleUpdate(HoodieMergeOnReadTable.java:116)
> 	at org.apache.hudi.table.HoodieCopyOnWriteTable.handleUpsertPartition(HoodieCopyOnWriteTable.java:265)
> 	... 30 more
> {code}
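>
> For anyone hitting the same trace: the {{Caused by}} section shows that {{HoodieAppendHandle.init}} builds a Hadoop {{Path}} from the record's partition path, which is the empty string for a non-partitioned table, and Hadoop's {{Path}} constructor rejects empty strings. A standalone sketch of just that failure mode (illustrative code only, not the Hudi source; the class name below is made up):
> {code:java}
> import org.apache.hadoop.fs.Path;
>
> // Standalone repro of the failure mode, not Hudi code.
> public class EmptyPartitionPathRepro {
>     public static void main(String[] args) {
>         String basePath = "/tmp/hudi_table";
>         String partitionPath = ""; // a non-partitioned table has an empty relative partition path
>
>         try {
>             // Path(String parent, String child) calls new Path(child) internally,
>             // and Path(String) rejects empty strings in checkPathArg(...).
>             Path partitionDir = new Path(basePath, partitionPath);
>             System.out.println("Resolved: " + partitionDir);
>         } catch (IllegalArgumentException e) {
>             System.out.println("Caught: " + e.getMessage());
>             // -> Caught: Can not create a Path from an empty string
>         }
>     }
> }
> {code}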
> I have created a PR to fix this issue.
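> For context, one possible guard (a sketch under my own assumptions; the actual PR may take a different approach, and {{PartitionPathUtil}} is a hypothetical name) is to treat an empty partition path as the table base path instead of passing it to {{new Path(...)}}:
> {code:java}
> import org.apache.hadoop.fs.Path;
>
> // Hypothetical helper, for illustration only; not the actual patch.
> public final class PartitionPathUtil {
>     private PartitionPathUtil() {}
>
>     // Resolve a partition directory, tolerating the empty partition
>     // path that non-partitioned tables use.
>     public static Path partitionDir(String basePath, String partitionPath) {
>         return (partitionPath == null || partitionPath.isEmpty())
>                 ? new Path(basePath)
>                 : new Path(basePath, partitionPath);
>     }
> }
> {code}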


