Posted to issues@spark.apache.org by "Reynold Xin (JIRA)" <ji...@apache.org> on 2015/01/10 05:39:34 UTC

[jira] [Resolved] (SPARK-5141) CaseInsensitiveMap throws "java.io.NotSerializableException"

     [ https://issues.apache.org/jira/browse/SPARK-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Reynold Xin resolved SPARK-5141.
--------------------------------
       Resolution: Fixed
    Fix Version/s: 1.2.1
                   1.3.0
         Assignee: Gankun Luo

> CaseInsensitiveMap throws "java.io.NotSerializableException"
> ------------------------------------------------------------
>
>                 Key: SPARK-5141
>                 URL: https://issues.apache.org/jira/browse/SPARK-5141
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: Gankun Luo
>            Assignee: Gankun Luo
>            Priority: Minor
>             Fix For: 1.3.0, 1.2.1
>
>
> The following code (reproduction at [https://github.com/luogankun/spark-jdbc]) throws a serialization exception:
> {code}
> CREATE TEMPORARY TABLE jdbc_table
> USING com.luogankun.spark.jdbc
> OPTIONS (
>   sparksql_table_schema '(TBL_ID int, TBL_NAME string, TBL_TYPE string)',
>   jdbc_table_name 'TBLS',
>   jdbc_table_schema '(TBL_ID, TBL_NAME, TBL_TYPE)',
>   url 'jdbc:mysql://hadoop000:3306/hive',
>   user 'root',
>   password 'root'
> );
> SELECT TBL_ID, TBL_NAME, TBL_TYPE FROM jdbc_table;
> {code}
> I get the following stack trace:
> {code}
> org.apache.spark.SparkException: Task not serializable
>         at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
>         at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
>         at org.apache.spark.SparkContext.clean(SparkContext.scala:1448)
>         at org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:616)
>         at org.apache.spark.sql.execution.Project.execute(basicOperators.scala:43)
>         at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:81)
>         at org.apache.spark.sql.hive.HiveContext$QueryExecution.stringResult(HiveContext.scala:386)
>         at org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:57)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:275)
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:211)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:365)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.io.NotSerializableException: org.apache.spark.sql.sources.CaseInsensitiveMap
>         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1183)
>         at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
>         ...
>         at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:42)
>         at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:73)
>         at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:164)
> {code}
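The root cause is visible in the bottom frame of the trace: org.apache.spark.sql.sources.CaseInsensitiveMap did not implement java.io.Serializable, so when a task closure captured the relation's options map, ClosureCleaner.ensureSerializable failed while serializing the closure. A minimal sketch of the failure mode and the kind of fix involved (the class and method names below are illustrative, not Spark's actual code) is:

```java
import java.io.*;
import java.util.*;

// Sketch of a case-insensitive options map that implements
// java.io.Serializable, so it can be captured in a task closure.
// Without "implements Serializable", round-tripping an instance
// through ObjectOutputStream throws NotSerializableException,
// mirroring the stack trace above.
public class CaseInsensitiveOptions implements Serializable {
    private final Map<String, String> inner = new HashMap<>();

    public CaseInsensitiveOptions(Map<String, String> options) {
        // Normalize keys to lower case so lookups ignore case.
        for (Map.Entry<String, String> e : options.entrySet()) {
            inner.put(e.getKey().toLowerCase(Locale.ROOT), e.getValue());
        }
    }

    public String get(String key) {
        return inner.get(key.toLowerCase(Locale.ROOT));
    }

    // Round-trip through Java serialization, the same check
    // ClosureCleaner.ensureSerializable performs on closures.
    public static CaseInsensitiveOptions roundTrip(CaseInsensitiveOptions m)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(m);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return (CaseInsensitiveOptions) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> opts = new HashMap<>();
        opts.put("URL", "jdbc:mysql://hadoop000:3306/hive");
        CaseInsensitiveOptions m = roundTrip(new CaseInsensitiveOptions(opts));
        System.out.println(m.get("url"));
    }
}
```

Note that because {{CaseInsensitiveMap}} ends up referenced from objects shipped to executors, simply marking it serializable (and ensuring its backing map is too) is enough to make the closure pass the serializability check.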



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org