Posted to issues@spark.apache.org by "tone (JIRA)" <ji...@apache.org> on 2016/09/01 11:50:20 UTC
[jira] [Commented] (SPARK-8368) ClassNotFoundException in closure for map
[ https://issues.apache.org/jira/browse/SPARK-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455154#comment-15455154 ]
tone commented on SPARK-8368:
-----------------------------
This comment was inserted automatically. The PR is NOT related to the issue [SPARK-8368].
Please ignore the comment here.
Sorry for the inconvenience.
Thanks!
> ClassNotFoundException in closure for map
> ------------------------------------------
>
> Key: SPARK-8368
> URL: https://issues.apache.org/jira/browse/SPARK-8368
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.4.0
> Environment: Centos 6.5, java 1.7.0_67, scala 2.10.4. Build the project on Windows 7 and run in a spark standalone cluster(or local) mode on Centos 6.X.
> Reporter: CHEN Zhiwei
> Assignee: Yin Huai
> Priority: Blocker
> Fix For: 1.4.1, 1.5.0
>
>
> After upgrading the cluster from Spark 1.3.0 to 1.4.0 (rc4), I encountered the following exception:
> ======begin exception========
> {quote}
> Exception in thread "main" java.lang.ClassNotFoundException: com.yhd.ycache.magic.Model$$anonfun$9$$anonfun$10
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:278)
> at org.apache.spark.util.InnerClosureFinder$$anon$4.visitMethodInsn(ClosureCleaner.scala:455)
> at com.esotericsoftware.reflectasm.shaded.org.objectweb.asm.ClassReader.accept(Unknown Source)
> at com.esotericsoftware.reflectasm.shaded.org.objectweb.asm.ClassReader.accept(Unknown Source)
> at org.apache.spark.util.ClosureCleaner$.getInnerClosureClasses(ClosureCleaner.scala:101)
> at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:197)
> at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:132)
> at org.apache.spark.SparkContext.clean(SparkContext.scala:1891)
> at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:294)
> at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:293)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:109)
> at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
> at org.apache.spark.rdd.RDD.map(RDD.scala:293)
> at org.apache.spark.sql.DataFrame.map(DataFrame.scala:1210)
> at com.yhd.ycache.magic.Model$.main(SSExample.scala:239)
> at com.yhd.ycache.magic.Model.main(SSExample.scala)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> {quote}
> ===============end exception===========
> I simplified the code that causes this issue to the following:
> ==========begin code==================
> {noformat}
> import org.apache.spark.{SparkConf, SparkContext}
> import org.apache.spark.sql.hive.HiveContext
> import org.apache.spark.mllib.linalg.Vectors
> import org.apache.spark.mllib.regression.LabeledPoint
>
> object Model extends Serializable {
>   def main(args: Array[String]) {
>     val Array(sql) = args
>     val sparkConf = new SparkConf().setAppName("Mode Example")
>     val sc = new SparkContext(sparkConf)
>     val hive = new HiveContext(sc)
>     // get data by hive sql
>     val rows = hive.sql(sql)
>     val data = rows.map(r => {
>       val arr = r.toSeq.toArray
>       val label = 1.0
>       // nested def inside the map closure -- this is what triggers the exception
>       def fmap = (input: Any) => 1.0
>       val feature = arr.map(_ => 1.0)
>       LabeledPoint(label, Vectors.dense(feature))
>     })
>     data.count()
>   }
> }
> {noformat}
> =====end code===========
> This code runs fine in spark-shell, but fails when submitted to a Spark cluster (standalone or local mode). I tried the same code on Spark 1.3.0 (local mode), and no exception was encountered.
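A note on the failure mode, for readers hitting the same trace: the stack shows Spark's ClosureCleaner calling {{Class.forName(...)}} while scanning the map closure for inner closure classes (here the nested {{def fmap}} generates an extra anonymous class such as {{Model$$anonfun$9$$anonfun$10}}). A plain {{Class.forName(name)}} resolves against the caller's defining classloader, which may not see classes shipped in the application jar. The following is a minimal, hypothetical sketch of that distinction in plain Java (names like {{LoaderSketch}} are illustrative, not from Spark's source); passing an explicit classloader is the general remedy:

```java
// Hypothetical sketch: Class.forName(name) uses the caller's defining
// classloader; Class.forName(name, init, loader) lets the caller pick one,
// e.g. the thread context classloader that sees application-jar classes.
public class LoaderSketch {
    public static Class<?> loadWithContextLoader(String name) throws ClassNotFoundException {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        if (cl == null) {
            // Fall back to this class's own loader if no context loader is set.
            cl = LoaderSketch.class.getClassLoader();
        }
        // 'false' skips static initialization; we only need the Class object.
        return Class.forName(name, false, cl);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadWithContextLoader("java.lang.String").getName());
    }
}
```

Whether this matches the exact fix that landed for 1.4.1/1.5.0 should be checked against the linked JIRA and its PR; the sketch only illustrates why the same jar can load fine in spark-shell (which sets a context classloader over the app classes) yet fail under spark-submit.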
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org