Posted to commits@hudi.apache.org by "leesf (Jira)" <ji...@apache.org> on 2019/09/04 03:51:00 UTC

[jira] [Commented] (HUDI-168) GCS: Calling getFileStatus before closing causes FileNotFoundException in cloud storage

    [ https://issues.apache.org/jira/browse/HUDI-168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921931#comment-16921931 ] 

leesf commented on HUDI-168:
----------------------------

Fixed via master: 3d408ee96b7cc8eb15756276dbab031eb549e6e2

> GCS: Calling getFileStatus before closing causes FileNotFoundException in cloud storage
> ---------------------------------------------------------------------------------------
>
>                 Key: HUDI-168
>                 URL: https://issues.apache.org/jira/browse/HUDI-168
>             Project: Apache Hudi (incubating)
>          Issue Type: Bug
>          Components: Common Core
>            Reporter: BALAJI VARADARAJAN
>            Assignee: BALAJI VARADARAJAN
>            Priority: Major
>              Labels: gcs-parity, pull-request-available
>             Fix For: 0.5.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Example Stack: 
>  
> Caused by: com.uber.hoodie.exception.HoodieRollbackException: Failed to rollback for commit 20190709234054
>  at com.uber.hoodie.table.HoodieMergeOnReadTable.lambda$rollback$5(HoodieMergeOnReadTable.java:525)
>  at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
>  at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
>  at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>  at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
>  at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
>  at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
>  at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
>  at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>  at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
>  at com.uber.hoodie.table.HoodieMergeOnReadTable.rollback(HoodieMergeOnReadTable.java:504)
>  at com.uber.hoodie.table.HoodieMergeOnReadTable.lambda$rollback$328a965c$1(HoodieMergeOnReadTable.java:307)
>  at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1040)
>  at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>  at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>  at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
>  at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
>  at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
>  at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
>  at scala.collection.AbstractIterator.to(Iterator.scala:1336)
>  at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
>  at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
>  at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
>  at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
>  at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:944)
>  at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:944)
>  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
>  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
>  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>  at org.apache.spark.scheduler.Task.run(Task.scala:109)
>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:344)
>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.FileNotFoundException: File not found : gs://XXX-data-warehouse-hudi-ingest-staging/hudi/data/hudi_ingest_raw/order_central_updates_latest/2019/07/08/.034bc4c3-bdba-45fe-857a-37f0c158a87c-0_20190709233145.log.6_1-0-1
>  at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.getFileStatus(GoogleHadoopFileSystemBase.java:1542)
>  at com.uber.hoodie.table.HoodieMergeOnReadTable.lambda$rollback$5(HoodieMergeOnReadTable.java:523)
>  ... 35 more
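
For context on the stack trace above, here is a minimal sketch (illustrative only, not the Hudi rollback code; the bucket path and class name below are made up) of the GCS behavior behind the failure: through the GCS Hadoop connector, an object only becomes visible once its output stream is closed, so calling getFileStatus on a log file that is still open throws FileNotFoundException, whereas HDFS makes the file visible in the namespace as soon as it is created.

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class GcsVisibilitySketch {
      public static void main(String[] args) throws Exception {
        // Hypothetical path, for illustration only.
        Path logFile = new Path("gs://example-bucket/hudi/.fileid-0_20190709233145.log.6_1-0-1");
        FileSystem fs = logFile.getFileSystem(new Configuration());

        FSDataOutputStream out = fs.create(logFile, true);
        out.write("log block".getBytes(StandardCharsets.UTF_8));

        // On HDFS this typically succeeds because the file is visible on create.
        // On GCS the object does not exist until the stream is closed, so this
        // call can fail with FileNotFoundException (the pattern in the stack above):
        // FileStatus premature = fs.getFileStatus(logFile);

        // Closing first, then querying, avoids the problem.
        out.close();
        FileStatus status = fs.getFileStatus(logFile);
        System.out.println("size after close: " + status.getLen());
      }
    }

As the issue title suggests, the safe pattern on cloud storage is to close the writer before asking for the file's status.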


