Posted to dev@phoenix.apache.org by "Rico Bergmann (Jira)" <ji...@apache.org> on 2020/12/17 09:33:00 UTC

[jira] [Created] (PHOENIX-6268) NoSuchMethodError when writing from Spark Dataframe to Phoenix with phoenix-spark connector

Rico Bergmann created PHOENIX-6268:
--------------------------------------

             Summary: NoSuchMethodError when writing from Spark Dataframe to Phoenix with phoenix-spark connector
                 Key: PHOENIX-6268
                 URL: https://issues.apache.org/jira/browse/PHOENIX-6268
             Project: Phoenix
          Issue Type: Bug
          Components: spark-connector
    Affects Versions: 5.0.0
            Reporter: Rico Bergmann


I opened a Spark shell (including the phoenix-spark jar via the --jars argument), loaded a DataFrame (df), and tried to store the DataFrame in Phoenix (backed by HBase).
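
For reference, the shell launch looked roughly like this (the jar path is a placeholder, not the exact artifact name):

spark-shell --jars /path/to/phoenix-spark.jar

The write call: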

df.write
  .format("org.apache.phoenix.spark")
  .mode(org.apache.spark.sql.SaveMode.Overwrite)
  .options(Map(
    "zkUrl" -> "<zkserver1>:<port>,<zkserver2>:<port>",
    "table" -> "targetTablename"))
  .save()

The table exists within Phoenix.
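
For anyone triaging, the connector's documented read path can be used to confirm that the table name and zkUrl resolve (a minimal sketch, untested in this environment; same placeholder options as above):

// Read the target table back through the phoenix-spark connector to
// verify that the table name and zkUrl are resolvable.
val check = spark.read
  .format("org.apache.phoenix.spark")
  .options(Map(
    "zkUrl" -> "<zkserver1>:<port>,<zkserver2>:<port>",
    "table" -> "targetTablename"))
  .load()
check.printSchema()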

I get the error below in the spark-shell. Can you help or fix this?

java.lang.NoSuchMethodError: 'scala.collection.mutable.ArrayOps scala.Predef$.refArrayOps(java.lang.Object[])'
 at org.apache.phoenix.spark.DataFrameFunctions.getFieldArray(DataFrameFunctions.scala:76)
 at org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:35)
 at org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:28)
 at org.apache.phoenix.spark.DefaultSource.createRelation(DefaultSource.scala:47)
 at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
 at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
 at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
 at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:90)
 at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:175)
 at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)
 at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
 at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)
 at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:171)
 at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:122)
 at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:121)
 at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:963)
 at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
 at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
 at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
 at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
 at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
 at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:963)
 at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:415)
 at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:399)
 ... 49 elided
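
For context: this signature of scala.Predef.refArrayOps (returning scala.collection.mutable.ArrayOps) is the Scala 2.11 one; the method's return type changed in Scala 2.12, so a NoSuchMethodError at this call site usually indicates that the phoenix-spark jar was built against a different Scala binary version than the Spark runtime. A quick check from the same shell (plain Spark/Scala calls, nothing Phoenix-specific):

// Print the Spark version and the Scala version it runs on; the
// phoenix-spark jar must target the same Scala binary series (2.11 vs 2.12).
scala> spark.version
scala> scala.util.Properties.versionString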



--
This message was sent by Atlassian Jira
(v8.3.4#803005)