Posted to commits@hudi.apache.org by "sivabalan narayanan (Jira)" <ji...@apache.org> on 2022/04/22 16:21:00 UTC

[jira] [Commented] (HUDI-3947) spark2 and scala12 bundle fails for quick start w/ 0.11 master

    [ https://issues.apache.org/jira/browse/HUDI-3947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17526533#comment-17526533 ] 

sivabalan narayanan commented on HUDI-3947:
-------------------------------------------

We tried adding hive-common-2.3.9.jar explicitly and it worked. So, we are investigating to understand what's different between the Scala 2.11 and Scala 2.12 bundles.
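
For anyone hitting this in the meantime, a rough sketch of the workaround (the bundle jar name, versions, and paths below are illustrative placeholders, not the exact command we ran):

{code}
# Launch the quick start spark-shell with hive-common supplied explicitly on the classpath.
# Adjust the bundle jar name, hive-common version, and local paths to your own build.
spark-shell \
  --jars /path/to/hudi-spark-bundle_2.12.jar,/path/to/hive-common-2.3.9.jar \
  --packages org.apache.spark:spark-avro_2.12:2.4.4 \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
{code}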

> spark2 and scala12 bundle fails for quick start w/ 0.11 master
> --------------------------------------------------------------
>
>                 Key: HUDI-3947
>                 URL: https://issues.apache.org/jira/browse/HUDI-3947
>             Project: Apache Hudi
>          Issue Type: Bug
>          Components: spark
>            Reporter: sivabalan narayanan
>            Priority: Blocker
>             Fix For: 0.11.0
>
>
>  
> {code:java}
> scala> df.write.format("hudi").
>      |   options(getQuickstartWriteConfigs).
>      |   option(PRECOMBINE_FIELD_OPT_KEY, "ts").
>      |   option(RECORDKEY_FIELD_OPT_KEY, "uuid").
>      |   option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
>      |   option(TABLE_NAME, tableName).
>      |   mode(Overwrite).
>      |   save(basePath)
> warning: there was one deprecation warning; for details, enable `:setting -deprecation' or `:replay -deprecation'
> java.lang.NoClassDefFoundError: org/apache/hadoop/hive/conf/HiveConf
>   at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:163)
>   at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
>   at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:136)
>   at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:160)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:157)
>   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:132)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81)
>   at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:696)
>   at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:80)
>   at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
>   at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
>   at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:696)
>   at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:310)
>   at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:291)
>   at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:249)
>   ... 68 elided
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 88 more
> scala>  {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)