Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2018/11/26 16:28:00 UTC

[jira] [Resolved] (SPARK-26145) Not Able To Read Data From Hive 3.0 Using Spark 2.3

     [ https://issues.apache.org/jira/browse/SPARK-26145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-26145.
----------------------------------
    Resolution: Duplicate

> Not Able To Read Data From Hive 3.0 Using Spark 2.3
> ---------------------------------------------------
>
>                 Key: SPARK-26145
>                 URL: https://issues.apache.org/jira/browse/SPARK-26145
>             Project: Spark
>          Issue Type: Bug
>          Components: Java API, SQL
>    Affects Versions: 2.3.0, 2.3.1
>         Environment: Hive Version: 3.1.0.3.0.0.0-1634
> HBase Version: 2.0.0.3.0.0.0-1634
> Spark Version: 2.3.1.3.0.0.0-1634
>            Reporter: Avasyu Gupta
>            Priority: Major
>         Attachments: Logs.txt
>
>
> Hello Team,
>  
> We are trying to read data from Hive tables using Spark SQL but are unable to do so. These are the steps we follow:
>  # Created tables in Hive 3.1.0 and linked them to HBase 2.0.0 tables using the HBaseStorageHandler storage handler and its SerDe.
>  # All Hive-related configuration, including spark.sql.warehouse.dir, the Thrift metastore server URI, ZooKeeper details, etc., is provided through SparkConf.
>  # We then create the SparkSession as *SparkSession spark = SparkSession.builder().config(conf).enableHiveSupport().getOrCreate();*
>  # Then, using SQLContext, we try to read data from the Hive table with *sqlContext.sql("select * from db_name.table_name").show();*
> At this step we hit the error *java.lang.ClassNotFoundException: Class org.apache.hadoop.hive.hbase.HBaseSerDe not found* (full logs attached).
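> For reference, a minimal, self-contained sketch of the flow above (the metastore URI, warehouse path, and class name are placeholders, not our exact values):
>
>     import org.apache.spark.SparkConf;
>     import org.apache.spark.sql.SparkSession;
>
>     public class HiveReadRepro {
>         public static void main(String[] args) {
>             // Placeholder values; our real job supplies the actual metastore
>             // URI, warehouse directory, and ZooKeeper details via SparkConf.
>             SparkConf conf = new SparkConf()
>                     .set("hive.metastore.uris", "thrift://metastore-host:9083")
>                     .set("spark.sql.warehouse.dir", "/apps/hive/warehouse");
>
>             SparkSession spark = SparkSession.builder()
>                     .config(conf)
>                     .enableHiveSupport()
>                     .getOrCreate();
>
>             // This is the call that fails with the HBaseSerDe ClassNotFoundException.
>             spark.sql("select * from db_name.table_name").show();
>         }
>     }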
> We include the hive-hbase-handler jar and all the other required jars in our commonLib directory and pass their absolute paths via the --jars option of spark-submit (an illustrative command follows), yet we have not found a way to resolve this error.
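> The shape of our submit command, for illustration (the class name, jar file names, and paths below are placeholders):
>
>     spark-submit --class com.example.HiveReadRepro \
>       --master yarn \
>       --jars /opt/commonLib/hive-hbase-handler.jar,/opt/commonLib/hbase-client.jar \
>       our-application.jar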
> We read in Spark's official documentation that it still supports Hive metastore versions only up to 2.1. So is there another way to connect to Hive 3.0 using Spark 2.3?
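> For example, would the Hive metastore interop settings help here? A sketch of what we would try (Spark 2.3 documents *spark.sql.hive.metastore.version* values only up to 2.1.1, so the value below is an assumption we have not verified against Hive 3):
>
>     SparkSession spark = SparkSession.builder()
>             // Highest metastore version documented for Spark 2.3.
>             .config("spark.sql.hive.metastore.version", "2.1.1")
>             // "maven" downloads matching Hive jars; a classpath of local jars also works.
>             .config("spark.sql.hive.metastore.jars", "maven")
>             .enableHiveSupport()
>             .getOrCreate();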



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org