Posted to issues@spark.apache.org by "Jianshi Huang (JIRA)" <ji...@apache.org> on 2015/06/02 03:23:17 UTC

[jira] [Commented] (SPARK-8012) ArrayIndexOutOfBoundsException in SerializationDebugger

    [ https://issues.apache.org/jira/browse/SPARK-8012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14568350#comment-14568350 ] 

Jianshi Huang commented on SPARK-8012:
--------------------------------------

Yeah, it's from a pretty big code base. I'm trying to reduce the scope.

BTW, I'm using 1.4.0-rc3.
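
While I work on a minimal repro, the shape of the failing path is what the trace below shows: cleaning a closure that captures something non-serializable. A rough sketch with made-up names (not our actual code; our captured object graph is much bigger):

{noformat}
import org.apache.spark.{SparkConf, SparkContext}

// Plain class, deliberately not Serializable.
class Lookup(val table: Map[String, Int])

object Repro {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("repro").setMaster("local[*]"))
    val lookup = new Lookup(Map("a" -> 1))

    // mapPartitions runs ClosureCleaner.ensureSerializable on the closure
    // eagerly; the captured `lookup` is not serializable, so Spark invokes
    // SerializationDebugger to improve the error. With our (much larger)
    // object graph, that debugger pass itself dies with the
    // ArrayIndexOutOfBoundsException instead of naming the offender.
    sc.parallelize(Seq("a", "b"))
      .mapPartitions(it => it.map(k => lookup.table.getOrElse(k, 0)))
      .count()
  }
}
{noformat}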

Jianshi

> ArrayIndexOutOfBoundsException in SerializationDebugger
> -------------------------------------------------------
>
>                 Key: SPARK-8012
>                 URL: https://issues.apache.org/jira/browse/SPARK-8012
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.4.0
>            Reporter: Jianshi Huang
>
> It makes the underlying NotSerializableException less obvious.
> {noformat}
> java.lang.reflect.InvocationTargetException
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.spark.serializer.SerializationDebugger$ObjectStreamClassMethods$.getObjFieldValues$extension(SerializationDebugger.scala:248)
>         at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:158)
>         at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:107)
>         at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:166)
>         at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:107)
>         at org.apache.spark.serializer.SerializationDebugger$.find(SerializationDebugger.scala:66)
>         at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:41)
>         at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
>         at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:81)
>         at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:312)
>         at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:305)
>         at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:132)
>         at org.apache.spark.SparkContext.clean(SparkContext.scala:1891)
>         at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:683)
>         at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:682)
>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:109)
>         at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
>         at org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:682)
>         at org.apache.spark.sql.execution.Project.doExecute(basicOperators.scala:40)
>         at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
>         at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
>         at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
>         at org.apache.spark.sql.sources.DataSourceStrategy$$anonfun$10.apply(DataSourceStrategy.scala:159)
>         at org.apache.spark.sql.sources.DataSourceStrategy$$anonfun$10.apply(DataSourceStrategy.scala:131)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>         at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>         at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>         at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
>         at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
>         at org.apache.spark.sql.sources.DataSourceStrategy$.buildPartitionedTableScan(DataSourceStrategy.scala:131)
>         at org.apache.spark.sql.sources.DataSourceStrategy$.apply(DataSourceStrategy.scala:80)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>         at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:396)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
>         at org.apache.spark.sql.execution.SparkStrategies$HashJoin$.apply(SparkStrategies.scala:109)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>         at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:396)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
>         at org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:300)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>         at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:396)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
>         at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:913)
>         at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:911)
>         at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:917)
>         at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:917)
>         at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:920)
>         at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:920)
>         at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation.run(commands.scala:98)
>         at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
>         at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
>         at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
>         at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
>         at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
>         at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
>         at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:920)
>         at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:920)
>         at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:338)
>         at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:144)
>         at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:135)
>         at com.paypal.risk.grs.datamart.infra.export.Madmen2Exporter.export(Madmen2Exporter.scala:80)
>         at com.paypal.risk.grs.datamart.infra.export.main.Madmen2Export$.main(Madmen2Export.scala:24)
>         at com.paypal.risk.grs.datamart.infra.export.main.Madmen2Export.main(Madmen2Export.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
>         at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
>         at java.io.ObjectStreamClass$FieldReflector.getObjFieldValues(ObjectStreamClass.java:2050)
>         at java.io.ObjectStreamClass.getObjFieldValues(ObjectStreamClass.java:1252)
>         ... 85 more
> {noformat}



