Posted to issues@spark.apache.org by "Dongjoon Hyun (JIRA)" <ji...@apache.org> on 2018/12/23 00:21:00 UTC
[jira] [Commented] (SPARK-26421) Spark 2.4.0 integrated with Hadoop 3.1.1 breaks Hive SQL (IDEA local mode only)
[ https://issues.apache.org/jira/browse/SPARK-26421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16727737#comment-16727737 ]
Dongjoon Hyun commented on SPARK-26421:
---------------------------------------
Hi, [~thinktothings]. This is a duplicate of SPARK-23534.
Apache Spark does not officially support Hadoop 3 yet.
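A quick way to confirm which Hadoop version ends up on the Spark classpath is to ask Hadoop's own `VersionInfo` class, which Hive's shim loader also consults. This is a minimal sketch, assuming `spark-core` (and its transitive Hadoop jars) are on the classpath; the object name is illustrative:

```scala
// Sketch: print the Hadoop version Spark actually loaded at runtime.
// The Hive 1.2.x shims bundled with Spark 2.4 reject anything reporting 3.x here.
import org.apache.hadoop.util.VersionInfo

object HadoopVersionCheck {
  def main(args: Array[String]): Unit = {
    // VersionInfo.getVersion is the same call Hive's ShimLoader uses internally.
    println(s"Hadoop version on classpath: ${VersionInfo.getVersion}")
  }
}
```

If this prints a 3.x version, `enableHiveSupport()` on Spark 2.4 will fail as described in the issue below.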
> Spark 2.4.0 integrated with Hadoop 3.1.1 breaks Hive SQL (IDEA local mode only)
> -------------------------------------------------------------------------------
>
> Key: SPARK-26421
> URL: https://issues.apache.org/jira/browse/SPARK-26421
> Project: Spark
> Issue Type: Bug
> Components: Deploy
> Affects Versions: 2.4.0
> Environment: - IDEA Maven project
> - JDK 1.8.0_191
> - Hadoop 3.1.1
> - Spark 2.4.0
> Reporter: thinktothings
> Priority: Major
>
> Spark 2.4.0 integrated with Hadoop 3.1.1 breaks Hive SQL, in IDEA local mode only.
> - IDEA Maven project
> - spark.sql connecting to Hive:
> import org.apache.spark.sql.SparkSession
>
> // Directory for Spark-managed tables (value is illustrative; not given in the report)
> val warehouseLocation = "spark-warehouse"
>
> val spark = SparkSession
>   .builder()
>   .master("local")
>   .appName("Spark Hive Example")
>   .config("spark.sql.warehouse.dir", warehouseLocation)
>   .enableHiveSupport() // fails here when the Hive shims see Hadoop 3.x
>   .getOrCreate()
>
> spark.sql("show databases").show()
>
> Running this code fails with the following error, in a local environment (not a cluster):
> ----------------------
> Exception in thread "main" java.lang.ExceptionInInitializerError
> at org.apache.hadoop.hive.conf.HiveConf.<clinit>(HiveConf.java:105)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at org.apache.spark.util.Utils$.classForName(Utils.scala:238)
> at org.apache.spark.sql.SparkSession$.hiveClassesArePresent(SparkSession.scala:1117)
> at org.apache.spark.sql.SparkSession$Builder.enableHiveSupport(SparkSession.scala:866)
> at com.opensource.bigdata.spark.sql.n_10_spark_hive.n_01_show_database.Run$.main(Run.scala:19)
> at com.opensource.bigdata.spark.sql.n_10_spark_hive.n_01_show_database.Run.main(Run.scala)
> Caused by: java.lang.IllegalArgumentException: Unrecognized Hadoop major version number: 3.1.1
> at org.apache.hadoop.hive.shims.ShimLoader.getMajorVersion(ShimLoader.java:174)
> at org.apache.hadoop.hive.shims.ShimLoader.loadShims(ShimLoader.java:139)
> at org.apache.hadoop.hive.shims.ShimLoader.getHadoopShims(ShimLoader.java:100)
> at org.apache.hadoop.hive.conf.HiveConf$ConfVars.<clinit>(HiveConf.java:368)
> ... 8 more
> Process finished with exit code 1
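For context, the `IllegalArgumentException` in the trace comes from Hive 1.2's `ShimLoader`, which maps the Hadoop major version to a shim class and only knows the 0.23.x and 2.x lines. The following is a simplified sketch of that version gate (not Hive's actual source; names and the shim map are illustrative):

```scala
// Simplified sketch of the version check in
// org.apache.hadoop.hive.shims.ShimLoader that produces the exception above.
object ShimCheck {
  // The Hive 1.2.x shims bundled with Spark 2.4 only cover these Hadoop lines.
  private val supportedShims = Map(
    "0.23" -> "Hadoop23Shims",
    "2"    -> "Hadoop23Shims"
  )

  // Reduce a full version string ("2.7.3", "3.1.1") to its major key.
  def majorVersion(version: String): String = {
    val parts = version.split("\\.")
    if (parts(0) == "0") parts.take(2).mkString(".") else parts(0)
  }

  // Look up the shim for a Hadoop version, or fail as ShimLoader does.
  def shimFor(version: String): String =
    supportedShims.getOrElse(
      majorVersion(version),
      throw new IllegalArgumentException(
        s"Unrecognized Hadoop major version number: $version"))
}
```

With this gate, `shimFor("2.7.3")` resolves to a shim, while `shimFor("3.1.1")` throws the exact message seen in the stack trace, which is why swapping in Hadoop 3 jars breaks `enableHiveSupport()` regardless of cluster vs. local mode.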
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org