Posted to issues@hive.apache.org by "Xuefu Zhang (JIRA)" <ji...@apache.org> on 2016/03/14 16:03:33 UTC

[jira] [Updated] (HIVE-13276) Hive on Spark doesn't work when spark.master=local

     [ https://issues.apache.org/jira/browse/HIVE-13276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xuefu Zhang updated HIVE-13276:
-------------------------------
    Assignee:     (was: Xuefu Zhang)

> Hive on Spark doesn't work when spark.master=local
> --------------------------------------------------
>
>                 Key: HIVE-13276
>                 URL: https://issues.apache.org/jira/browse/HIVE-13276
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 2.1.0
>            Reporter: Xuefu Zhang
>
> The following problem occurs with the latest Hive master and Spark 1.6.1, using the Hive CLI on macOS.
> {code}
>   set mapreduce.job.reduces=<number>
> java.lang.NoClassDefFoundError: Could not initialize class org.apache.spark.rdd.RDDOperationScope$
> 	at org.apache.spark.SparkContext.withScope(SparkContext.scala:714)
> 	at org.apache.spark.SparkContext.hadoopRDD(SparkContext.scala:991)
> 	at org.apache.spark.api.java.JavaSparkContext.hadoopRDD(JavaSparkContext.scala:419)
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.generateMapInput(SparkPlanGenerator.java:205)
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.generateParentTran(SparkPlanGenerator.java:145)
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.generate(SparkPlanGenerator.java:117)
> 	at org.apache.hadoop.hive.ql.exec.spark.LocalHiveSparkClient.execute(LocalHiveSparkClient.java:130)
> 	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.submit(SparkSessionImpl.java:71)
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:94)
> 	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:156)
> 	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:101)
> 	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1837)
> 	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1578)
> 	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1351)
> 	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1122)
> 	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1110)
> 	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
> 	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
> 	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:400)
> 	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:778)
> 	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:717)
> 	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:645)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Could not initialize class org.apache.spark.rdd.RDDOperationScope$
> {code}
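> For context, the scenario in the title can be reproduced with a session along these lines (a minimal sketch; the query and table name are hypothetical, only the two settings come from this report):
> {code}
> -- run Hive queries on the Spark engine, with Spark in local mode
> set hive.execution.engine=spark;
> set spark.master=local;
> -- any query that launches a SparkTask then fails as above
> select count(*) from src;
> {code}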



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)