Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:23:38 UTC

[jira] [Updated] (SPARK-16680) Set spark.driver.userClassPathFirst=true, and run spark-sql failed

     [ https://issues.apache.org/jira/browse/SPARK-16680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-16680:
---------------------------------
    Labels: bulk-closed  (was: )

> Set spark.driver.userClassPathFirst=true, and run spark-sql failed
> ------------------------------------------------------------------
>
>                 Key: SPARK-16680
>                 URL: https://issues.apache.org/jira/browse/SPARK-16680
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.1.0
>            Reporter: KaiXinXIaoLei
>            Priority: Major
>              Labels: bulk-closed
>
> Running "sh bin/spark-sql --conf spark.driver.userClassPathFirst=true" fails with the following exception:
> 16/07/22 15:54:54 INFO HiveSharedState: Warehouse path is 'file:/home/hll/code/spark-master-0722/spark-master/spark-warehouse'.
> Exception in thread "main" java.lang.IllegalArgumentException: Unable to locate hive jars to connect to metastore. Please set spark.sql.hive.metastore.jars.
>         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:299)
>         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:263)
>         at org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:39)
>         at org.apache.spark.sql.hive.HiveSharedState.metadataHive(HiveSharedState.scala:38)
>         at org.apache.spark.sql.hive.HiveSessionState.metadataHive$lzycompute(HiveSessionState.scala:43)
>         at org.apache.spark.sql.hive.HiveSessionState.metadataHive(HiveSessionState.scala:43)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:62)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.<init>(SparkSQLCLIDriver.scala:288)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:137)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
>         at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
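
The error message itself points at spark.sql.hive.metastore.jars: with spark.driver.userClassPathFirst=true the driver apparently cannot resolve the built-in Hive client jars. A possible workaround, sketched below and not verified against this build, is to supply the Hive client classpath explicitly; the jar path and Hive version are placeholders, not values taken from the report:

    # unverified sketch: point spark.sql.hive.metastore.jars at a real Hive client
    # installation whose version matches spark.sql.hive.metastore.version
    sh bin/spark-sql \
      --conf spark.driver.userClassPathFirst=true \
      --conf spark.sql.hive.metastore.version=1.2.1 \
      --conf spark.sql.hive.metastore.jars=/path/to/hive/lib/*

spark.sql.hive.metastore.jars also accepts "builtin" (the default) and "maven"; whether either of those avoids the class-loader problem reported here has not been confirmed.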



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org