Posted to user@hive.apache.org by yuemeng1 <yu...@huawei.com> on 2014/12/02 04:59:08 UTC

java.lang.NoClassDefFoundError: org/apache/spark/SparkJobInfo

I downloaded spark-1.1.0-bin-hadoop2.4 from
http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com/data/ and replaced
its assembly with the Spark 1.2.x one from
http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com/data/spark-assembly-1.2.0-SNAPSHOT-hadoop2.3.0-cdh5.1.2.jar,
but when I run a join query it fails with the error below. I have already
replaced the Spark assembly jar, so why does this still happen?
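
For context, Hive on Spark is normally wired up through a few session
settings, and the Spark assembly jar must also be visible to Hive itself
(typically by linking it into HIVE_HOME/lib). A sketch; the master URL
and Spark home path below are placeholders, not values from this thread:

    hive> set hive.execution.engine=spark;  -- run queries on Spark, not MapReduce
    hive> set spark.master=<master-url>;    -- standalone or YARN master URL
    hive> set spark.home=/path/to/spark;    -- assumed Spark installation directory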

hive> select distinct st.sno,sname from student st join score sc 
on(st.sno=sc.sno) where sc.cno IN(11,12,13) and st.sage > 28;
Query ID = root_20141203035353_f94de037-1769-410e-8e91-19c9bf88e3c0
Total jobs = 2
Launching Job 1 out of 2
In order to change the average load for a reducer (in bytes):
   set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
   set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
   set mapreduce.job.reduces=<number>
java.lang.NoClassDefFoundError: org/apache/spark/SparkJobInfo
         at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.execute(RemoteHiveSparkClient.java:143)
         at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.submit(SparkSessionImpl.java:64)
         at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:103)
         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1645)
         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1405)
         at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1217)
         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1044)
         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1034)
         at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:201)
         at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:153)
         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:364)
         at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:712)
         at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:631)
         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:570)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:601)
         at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.SparkJobInfo
         at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
         at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
         at java.security.AccessController.doPrivileged(Native Method)
         at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
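
The missing class, org.apache.spark.SparkJobInfo, was added in Spark 1.2,
so this error usually means Hive is still loading an older (or no) Spark
assembly. A quick check is to list the jar's contents; a sketch, where the
jar path is an assumption and should be whatever file Hive actually loads:

    # look for the missing class inside the assembly
    jar tf spark-assembly-1.2.0-SNAPSHOT-hadoop2.3.0-cdh5.1.2.jar | grep SparkJobInfo
    # a Spark 1.2 assembly should print:
    #   org/apache/spark/SparkJobInfo.class
    # no output means the class is absent from this jar; if it is present,
    # the jar on Hive's classpath may be a different file than the one replaced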

Re: java.lang.NoClassDefFoundError: org/apache/spark/SparkJobInfo

Posted by Xuefu Zhang <xz...@cloudera.com>.
I think your Spark cluster needs to be a build from the latest Spark-1.2
branch. You need to build it yourself.

--Xuefu
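
For reference, a from-source build along those lines would look roughly
like this. A sketch: the branch name, Maven profiles, and Hadoop version
are assumptions that must match your cluster, and Hive on Spark expects a
Spark build that does not bundle Hive classes (i.e. built without -Phive):

    # check out and package the Spark 1.2 branch
    git clone https://github.com/apache/spark.git
    cd spark
    git checkout branch-1.2
    # builds a runnable distribution under dist/, including the assembly jar
    ./make-distribution.sh --tgz -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0

The assembly jar under dist/lib/ would then be the one to put on Hive's
classpath in place of the old 1.1 assembly.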
