Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2020/08/20 18:20:20 UTC

[GitHub] [hudi] kpurella opened a new issue #2001: NPE While writing data to same partition on S3

kpurella opened a new issue #2001:
URL: https://github.com/apache/hudi/issues/2001


   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://cwiki.apache.org/confluence/display/HUDI/FAQ)? yes
   
   - Join the mailing list to engage in conversations and get faster support at dev-subscribe@hudi.apache.org. 
   
   - If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
   A java.lang.NullPointerException is thrown while repeatedly writing data to the same Hudi partition.
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   In my case, I am reading data from Kafka and writing it to Hudi with custom code (not using DeltaStreamer). The job's batch duration is 10 seconds and the S3 partitioning is hourly, so every 10 seconds the job writes data to the same hourly Hudi partition; during these repeated writes, Hudi throws a NullPointerException. A rough sketch of this write pattern follows.
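   
   To make that pattern concrete, here is a minimal sketch of the loop described above: a 10-second Spark Streaming batch that writes every micro-batch into the same hourly Hudi partition. The topic, config, and helper names (sparkConf, topics, kafkaParams, toDataFrame) are illustrative assumptions, not the actual job; the Hudi options are the ones shown in the code snippet further down.
   
           // Sketch only: 10-second micro-batches from Kafka, each batch written to Hudi.
           // toDataFrame(...) is a hypothetical helper that parses the Kafka payload into a
           // Dataset<Row> containing the record-key / precombine / partition fields.
           JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.seconds(10));
           JavaInputDStream<ConsumerRecord<String, String>> stream =
                   KafkaUtils.createDirectStream(jssc,
                           LocationStrategies.PreferConsistent(),
                           ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams));
   
           stream.foreachRDD(rdd -> {
               Dataset<Row> dataFrame = toDataFrame(rdd);   // hypothetical conversion helper
               dataFrame.write()
                        .format("org.apache.hudi")
                        .mode(SaveMode.Append)
                        // ... Hudi options as in the code snippet below ...
                        .save(HUDI_BASE_PATH_PROP);          // same hourly partition every batch
           });
   
           jssc.start();
           jssc.awaitTermination();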
   
   **Expected behavior**
   
   Hudi should commit the data without any exceptions.
   
   **Environment Description**
   
   
   * Hudi version : 0.5.2-incubating
   
   * Spark version : 2.4.5
   
   * Hive version : 2.3.6
   
   * Hadoop version : 2.8.5
   
   * Storage (HDFS/S3/GCS..) : S3
   
   * Running on Docker? (yes/no) : no
   
   * EMR : 5.30.1
   
   
   **Code snippet**
        // Hudi write issued for each 10-second batch
        dataFrame.write()
                 .format("org.apache.hudi")
                 .mode(SaveMode.Append)
                 .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY(), recordKey)
                 .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY(), partitionKey)
                 .option(DataSourceWriteOptions.OPERATION_OPT_KEY(), operation)
                 .option(HoodieWriteConfig.TABLE_NAME, tableName)
                 .option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY(), tableType)
                 .option(DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY(), hiveSyncEnabled)
                 .option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY(), keyGeneratorClass)
                 .option("hoodie.parquet.compression.codec", compressionCodec)
                 .option("hoodie.consistency.check.enabled", consistencyCheckEnabled)
                 .option("hoodie.compact.inline.max.delta.commits", inlineMaxDeltaCommits)
                 .option("hoodie.compact.inline", compactInlines)
                 .option("hoodie.insert.shuffle.parallelism", parallelism)
                 .option("hoodie.upsert.shuffle.parallelism", parallelism)
                 .option(DataSourceWriteOptions.HIVE_DATABASE_OPT_KEY(), hiveDBName)
                 .option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY(), hiveTableName)
                 .option(DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY(), hivePartitions)
                 .option(DataSourceWriteOptions.HIVE_URL_OPT_KEY(), hiveUrl)
                 .save(HUDI_BASE_PATH_PROP);
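   
   One thing to note when reading the snippet: KEYGENERATOR_CLASS_OPT_KEY hands record-key and partition-path construction to a custom generator, so the partition path Hudi sees is whatever that class returns for each record. A hypothetical illustration of the value it is expected to produce (both fields non-null, partition path relative to the Hudi base path; the literals are invented):
   
        // Hypothetical per-record key handed back to Hudi by a key generator:
        // a unique record key plus a relative partition path such as "yyyy/MM/dd/HH".
        HoodieKey key = new HoodieKey("eventId-123", "2020/08/20/12");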
   
   
   **Stacktrace**
   INFO DAGScheduler: ResultStage 9 (collect at HoodieBloomIndex.java:205) failed in 1.452 s due to Job aborted due to stage failure: Task 0 in stage 9.0 failed 4 times, most recent failure: Lost task 0.3 in stage 9.0 (TID 10, ip-10-223-69-15.emr.awsw.cld.ds.dtveng.net, executor 1): java.lang.NullPointerException
           at java.util.ArrayList.addAll(ArrayList.java:583)
           at org.apache.hudi.common.table.view.HoodieTableFileSystemView.fetchAllStoredFileGroups(HoodieTableFileSystemView.java:170)
           at org.apache.hudi.common.table.view.AbstractTableFileSystemView.getLatestBaseFilesBeforeOrOn(AbstractTableFileSystemView.java:355)
           at org.apache.hudi.index.bloom.HoodieBloomIndex.lambda$loadInvolvedFiles$19c2c1bb$1(HoodieBloomIndex.java:201)
           at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:125)
           at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:125)
           at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
           at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
           at scala.collection.Iterator$class.foreach(Iterator.scala:891)
           at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
           at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
           at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
           at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
           at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
           at scala.collection.AbstractIterator.to(Iterator.scala:1334)
           at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
           at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1334)
           at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
           at scala.collection.AbstractIterator.toArray(Iterator.scala:1334)
           at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$15.apply(RDD.scala:990)
           at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$15.apply(RDD.scala:990)
           at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
           at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
           at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
           at org.apache.spark.scheduler.Task.run(Task.scala:123)
           at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
           at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1405)
           at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
    
   Driver stacktrace:
   20/08/20 12:37:51 INFO DAGScheduler: Job 4 failed: collect at HoodieBloomIndex.java:205, took 1.455517 s
   Exception in thread "main" org.apache.hudi.exception.HoodieUpsertException: Failed to upsert for commit time 20200820123744
           at org.apache.hudi.client.HoodieWriteClient.upsert(HoodieWriteClient.java:192)
           at org.apache.hudi.DataSourceUtils.doWriteOperation(DataSourceUtils.java:208)
           at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:147)
           at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:108)
           at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
           at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
           at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
           at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
           at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:173)
           at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:169)
           at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:197)
           at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
           at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:194)
           at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:169)
           at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:114)
           at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:112)
           at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
           at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
   





[GitHub] [hudi] kpurella commented on issue #2001: NPE While writing data to same partition on S3

Posted by GitBox <gi...@apache.org>.
kpurella commented on issue #2001:
URL: https://github.com/apache/hudi/issues/2001#issuecomment-680144616


   Resolved after addressing the partition-path issue.





[GitHub] [hudi] kpurella commented on issue #2001: NPE While writing data to same partition on S3

Posted by GitBox <gi...@apache.org>.
kpurella commented on issue #2001:
URL: https://github.com/apache/hudi/issues/2001#issuecomment-680144291


   @bvaradar Thank you for your response. I was able to resolve this issue.
   I was building an invalid partition path, which was causing the issue. Thank you.
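   
   For anyone landing here with the same stacktrace: the fix was on the application side, making the key generator return a valid partition path. A small, hypothetical sketch of deriving an hourly partition path from an event timestamp (class and field names are assumptions, not the reporter's code):
   
        // Hypothetical helper: turn an epoch-millis event time into a non-null,
        // relative hourly partition path such as "2020/08/20/12".
        import java.time.Instant;
        import java.time.ZoneOffset;
        import java.time.format.DateTimeFormatter;
   
        public class HourlyPartitionPath {
            private static final DateTimeFormatter HOURLY =
                    DateTimeFormatter.ofPattern("yyyy/MM/dd/HH").withZone(ZoneOffset.UTC);
   
            public static String fromEpochMillis(long eventTimeMillis) {
                return HOURLY.format(Instant.ofEpochMilli(eventTimeMillis));
            }
        }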
   
   





[GitHub] [hudi] bvaradar commented on issue #2001: NPE While writing data to same partition on S3

Posted by GitBox <gi...@apache.org>.
bvaradar commented on issue #2001:
URL: https://github.com/apache/hudi/issues/2001#issuecomment-679264433


   Can you turn on INFO-level logging and attach the logs so we can debug this?
   
   Thanks,
   Balaji.V





[GitHub] [hudi] kpurella closed issue #2001: NPE While writing data to same partition on S3

Posted by GitBox <gi...@apache.org>.
kpurella closed issue #2001:
URL: https://github.com/apache/hudi/issues/2001


   

