Posted to issues@phoenix.apache.org by "yj (Jira)" <ji...@apache.org> on 2022/09/07 08:29:00 UTC

[jira] [Comment Edited] (PHOENIX-6783) Phoenix5-Spark3 Connector SQL Support

    [ https://issues.apache.org/jira/browse/PHOENIX-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17601198#comment-17601198 ] 

yj edited comment on PHOENIX-6783 at 9/7/22 8:28 AM:
-----------------------------------------------------

[~stoty] I'm sorry to keep asking you questions.

 

So I put:
 * all the jar files listed by the {{hbase mapredcp}} command into the */hbase/jars* directory
 * all the HBase configuration files into the */hbase/conf* directory
 * the phoenix5-spark3-shaded-6.0.0-SNAPSHOT jar file into the *root directory*

 

And I started my application with the command below:

{{./spark-shell \}}

{{--conf spark.driver.extraClassPath=/hbase/conf:/hbase/jars/audience-annotations-0.5.0.jar:/hbase/jars/findbugs-annotations-1.3.9-1.jar:/hbase/jars/htrace-core4-4.2.0-incubating.jar:/hbase/jars/slf4j-api-1.7.30.jar:/hbase/jars/commons-logging-1.2.jar:/hbase/jars/hbase-shaded-mapreduce-2.2.3.7.1.7.0-551.jar:/hbase/jars/log4j-1.2.17-cloudera1.jar:/phoenix5-spark3-shaded-6.0.0-SNAPSHOT.jar \}}

{{--conf spark.executor.extraClassPath=/hbase/conf:/hbase/jars/audience-annotations-0.5.0.jar:/hbase/jars/findbugs-annotations-1.3.9-1.jar:/hbase/jars/htrace-core4-4.2.0-incubating.jar:/hbase/jars/slf4j-api-1.7.30.jar:/hbase/jars/commons-logging-1.2.jar:/hbase/jars/hbase-shaded-mapreduce-2.2.3.7.1.7.0-551.jar:/hbase/jars/log4j-1.2.17-cloudera1.jar:/phoenix5-spark3-shaded-6.0.0-SNAPSHOT.jar}}
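
Once the shell is up, I was going to sanity-check the connector with a small read, roughly like this (a rough sketch: the table name and zkUrl are just the values from my test environment, and the option names mirror my Spark SQL create statement, so the README spelling may differ):

{{val df = spark.read.format("phoenix").option("table", "TABLE1").option("zkUrl", "192.168.0.103:2181").load()}}

{{df.show()}}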

 
Do I understand this correctly?
 



> Phoenix5-Spark3 Connector SQL Support
> -------------------------------------
>
>                 Key: PHOENIX-6783
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6783
>             Project: Phoenix
>          Issue Type: Bug
>          Components: spark-connector
>         Environment: # Pure Spark 3.0.3
>  # phoenix5-spark3-shaded-6.0.0-SNAPSHOT.jar located in $SPARK_HOME/jars
>  # SQL used to create Spark Phoenix data source table is:
> {{create table Table1 (ID long, COL1 string) using  phoenix options (table 'Table1', zkUrl '192.168.0.103:2181', primary 'ID')}}
>            Reporter: yj
>            Priority: Major
>
> While running a Spark 3.0 integration test using the *phoenix5-spark3-shaded module of the [Phoenix-Connectors|https://github.com/apache/phoenix-connectors] project*, the following question came up.
>  
> The test process is as follows:
> 1. In Phoenix, I created a table called "TABLE1" and then loaded data into it.
> 2. After that, I tried to select data from that table using Spark.
>  
> As described in the README of the phoenix5-spark3 module, loading Phoenix data into a Spark DataFrame with the Phoenix data source works well.
> However, when I *create a Phoenix Spark data source table through Spark SQL* and try to select from that table, the following error occurs:
> {{java.util.concurrent.ExecutionException: org.apache.spark.sql.AnalysisException: phoenix is not a valid Spark SQL Data Source}}
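> To make the failing step concrete, here is a rough sketch of what I run (the table name is the one from the Environment section above):
> {{spark.sql("select * from Table1").show()}}
> which fails with the error above, while the DataFrame read path described in the README works.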
>  
> I wonder whether Phoenix-Connectors does not support creating Phoenix Spark data source tables, or whether there is some other reason for this error.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)