Posted to issues@phoenix.apache.org by "Istvan Toth (Jira)" <ji...@apache.org> on 2022/04/08 07:50:00 UTC
[jira] [Commented] (PHOENIX-6268) NoSuchMethodError when writing from Spark Dataframe to Phoenix with phoenix-spark connector
[ https://issues.apache.org/jira/browse/PHOENIX-6268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17519386#comment-17519386 ]
Istvan Toth commented on PHOENIX-6268:
--------------------------------------
Removed the Release note, as it seems that this is not an actual bug that we fixed.
> NoSuchMethodError when writing from Spark Dataframe to Phoenix with phoenix-spark connector
> -------------------------------------------------------------------------------------------
>
> Key: PHOENIX-6268
> URL: https://issues.apache.org/jira/browse/PHOENIX-6268
> Project: Phoenix
> Issue Type: Bug
> Components: spark-connector
> Affects Versions: 5.0.0
> Reporter: Rico Bergmann
> Priority: Critical
> Fix For: connectors-6.0.0
>
>
> I opened a Spark shell (passing the phoenix-spark jar via the --jars argument), loaded a DataFrame (df), and wanted to store it in a Phoenix server (backed by HBase):
> df.write.format("org.apache.phoenix.spark")
>   .mode(org.apache.spark.sql.SaveMode.Overwrite)
>   .options(Map(
>     "zkUrl" -> "<zkserver1>:<port>,<zkserver2>:<port>",
>     "table" -> "targetTablename"))
>   .save()
>
> The table exists within Phoenix.
> I get the error below in spark-shell. Can you help or fix this?
> The Spark version is 3.0.1.
> java.lang.NoSuchMethodError: 'scala.collection.mutable.ArrayOps scala.Predef$.refArrayOps(java.lang.Object[])'
> at org.apache.phoenix.spark.DataFrameFunctions.getFieldArray(DataFrameFunctions.scala:76)
> at org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:35)
> at org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:28)
> at org.apache.phoenix.spark.DefaultSource.createRelation(DefaultSource.scala:47)
> at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
> at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:90)
> at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:175)
> at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)
> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:171)
> at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:122)
> at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:121)
> at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:963)
> at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
> at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
> at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
> at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
> at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
> at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:963)
> at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:415)
> at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:399)
> ... 49 elided
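
For reference, a minimal self-contained spark-shell sketch of the write path described in the report. The jar path, ZooKeeper quorum, table name, and columns are placeholders, and it assumes the phoenix-spark jar on the classpath was built against the same Scala version as the Spark runtime; the format and option names simply mirror the snippet above.

    // Started as: spark-shell --jars <path-to-phoenix-spark-jar>   (jar path is a placeholder)
    import org.apache.spark.sql.SaveMode
    import spark.implicits._

    // Hypothetical sample data; TARGET_TABLE with columns ID and NAME is assumed to already exist in Phoenix.
    val df = Seq((1L, "first"), (2L, "second")).toDF("ID", "NAME")

    df.write
      .format("org.apache.phoenix.spark")
      .mode(SaveMode.Overwrite)
      .options(Map(
        "zkUrl" -> "<zkserver1>:<port>,<zkserver2>:<port>",  // placeholder ZooKeeper quorum, as in the report
        "table" -> "TARGET_TABLE"))                          // placeholder target table name
      .save()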