Posted to commits@hudi.apache.org by "Javier Vega (Jira)" <ji...@apache.org> on 2020/01/16 18:17:00 UTC

[jira] [Updated] (HUDI-528) Incremental Pull fails when latest commit is empty

     [ https://issues.apache.org/jira/browse/HUDI-528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Javier Vega updated HUDI-528:
-----------------------------
    Description: 
When trying to create an incremental view of a dataset, an exception is thrown when the latest commit in the time range is empty. To determine the schema of the dataset, Hudi will [read the latest commit file, parse it, and take the first metadata file path|https://github.com/apache/incubator-hudi/blob/480fc7869d4d69e1219bf278fd9a37f27ac260f6/hudi-spark/src/main/scala/org/apache/hudi/IncrementalRelation.scala#L78-L80]. If the latest commit is empty, however, the field used to determine file paths (partitionToWriteStats) is empty, causing the following exception:

{code:java}
java.util.NoSuchElementException
  at java.util.HashMap$HashIterator.nextNode(HashMap.java:1447)
  at java.util.HashMap$ValueIterator.next(HashMap.java:1474)
  at org.apache.hudi.IncrementalRelation.<init>(IncrementalRelation.scala:80)
  at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:65)
  at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:46)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
  at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)

{code}
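The failure mode can be reproduced in isolation: IncrementalRelation takes the first value from the commit metadata's partition-to-write-stats map without checking whether the map is empty, and calling next() on the value iterator of an empty HashMap throws NoSuchElementException. The sketch below is illustrative only (it is not Hudi code; the map contents and the fallback message are assumptions for the example):

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

public class EmptyCommitRepro {
    public static void main(String[] args) {
        // Mirrors partitionToWriteStats when the latest commit is empty:
        // a commit with no write stats yields an empty map.
        Map<String, String> partitionToWriteStats = new HashMap<>();

        // What IncrementalRelation effectively does: take the first value
        // without checking for emptiness -- this throws on an empty commit.
        try {
            String firstStat = partitionToWriteStats.values().iterator().next();
            System.out.println("first stat: " + firstStat);
        } catch (NoSuchElementException e) {
            System.out.println("NoSuchElementException on empty commit metadata");
        }

        // One possible guard: check for emptiness first, so the caller can
        // fall back (e.g. to an earlier, non-empty commit) for the schema.
        String firstStat = partitionToWriteStats.values().stream()
                .findFirst()
                .orElse("no write stats; fall back to an earlier commit");
        System.out.println(firstStat);
    }
}
{code}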


> Incremental Pull fails when latest commit is empty
> --------------------------------------------------
>
>                 Key: HUDI-528
>                 URL: https://issues.apache.org/jira/browse/HUDI-528
>             Project: Apache Hudi (incubating)
>          Issue Type: Bug
>          Components: Incremental Pull
>            Reporter: Javier Vega
>            Priority: Minor
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)