Posted to commits@hudi.apache.org by "sivabalan narayanan (Jira)" <ji...@apache.org> on 2022/03/01 23:11:00 UTC

[jira] [Assigned] (HUDI-3546) Quick start guide fails w/ latest master

     [ https://issues.apache.org/jira/browse/HUDI-3546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

sivabalan narayanan reassigned HUDI-3546:
-----------------------------------------

    Assignee: Raymond Xu

> Quick start guide fails w/ latest master
> ----------------------------------------
>
>                 Key: HUDI-3546
>                 URL: https://issues.apache.org/jira/browse/HUDI-3546
>             Project: Apache Hudi
>          Issue Type: Bug
>          Components: spark
>            Reporter: sivabalan narayanan
>            Assignee: Raymond Xu
>            Priority: Major
>
> Tried to go through the quick start guide and ran into this issue. I built Hudi as usual:
>  
> mvn package -DskipTests
>  
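> (Assumption, taken from the Hudi README of this era rather than anything confirmed for this ticket: the default build targets a single Scala binary version, and a Scala 2.12 build is selected explicitly with the scala-2.12 profile, e.g.
>  
> mvn clean package -DskipTests -Dscala-2.12
>  
> so the bundle that ends up on the spark-shell classpath has to match the Scala version of that shell.)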
> {code:java}
> scala> df.write.format("hudi").
>      |   options(getQuickstartWriteConfigs).
>      |   option(PRECOMBINE_FIELD_OPT_KEY, "ts").
>      |   option(RECORDKEY_FIELD_OPT_KEY, "uuid").
>      |   option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
>      |   option(TABLE_NAME, tableName).
>      |   mode(Overwrite).
>      |   save(basePath)
> warning: there was one deprecation warning; re-run with -deprecation for details
> 22/03/01 18:08:37 WARN DFSPropertiesConfiguration: Cannot find HUDI_CONF_DIR, please set it as the dir of hudi-defaults.conf
> 22/03/01 18:08:37 WARN DFSPropertiesConfiguration: Properties file file:/etc/hudi/conf/hudi-defaults.conf not found. Ignoring to load props file
> java.lang.NoSuchMethodError: scala.Some.value()Ljava/lang/Object;
>   at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:103)
>   at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:162)
>   at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
>   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81)
>   at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:696)
>   at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:696)
>   at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
>   at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
>   at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
>   at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:696)
>   at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:305)
>   at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:291)
>   at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:249)
>   ... 68 elided {code}
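> Likely cause, still to be confirmed: scala.Some.value() only exists from Scala 2.12 onward, so this NoSuchMethodError is the usual symptom of a Hudi bundle compiled against one Scala binary version being loaded into a spark-shell running another. A minimal check, assuming the same shell session as above:
> {code:java}
> scala> scala.util.Properties.versionNumberString   // Scala version of the running shell
> scala> spark.version                               // Spark version backing the shell
> {code}
> If the shell reports Scala 2.11.x while the freshly built bundle is a _2.12 artifact (or the other way around), rebuilding with the matching Scala profile or launching a matching spark-shell should get past this error.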



--
This message was sent by Atlassian Jira
(v8.20.1#820001)