Posted to issues@spark.apache.org by "Yin Huai (JIRA)" <ji...@apache.org> on 2015/05/21 18:59:17 UTC

[jira] [Resolved] (SPARK-7565) Broken maps in jsonRDD

     [ https://issues.apache.org/jira/browse/SPARK-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yin Huai resolved SPARK-7565.
-----------------------------
       Resolution: Fixed
    Fix Version/s: 1.4.0

Issue resolved by pull request 6299
[https://github.com/apache/spark/pull/6299]

> Broken maps in jsonRDD
> ----------------------
>
>                 Key: SPARK-7565
>                 URL: https://issues.apache.org/jira/browse/SPARK-7565
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.4.0
>            Reporter: Paul Colomiets
>            Assignee: Davies Liu
>            Priority: Blocker
>             Fix For: 1.4.0
>
>
> When I use the following JSON:
> {code}
> {"obj": {"a": "hello"}}
> {code}
> And load it with the following Python code:
> {code}
> from pyspark.sql.types import StructType, StructField, MapType, StringType
>
> # Read the JSON file as text and apply an explicit schema with a map column.
> tf = sc.textFile('test.json')
> schema = StructType([StructField("obj", MapType(StringType(), StringType()), True)])
> v = sqlContext.jsonRDD(tf, schema)
> v.save('test.parquet', mode='overwrite')
> {code}
> I get the following error on the Spark master branch:
> {code}
> Py4JJavaError: An error occurred while calling o78.save.
> : org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 5.0 failed 1 times, most recent failure: Lost task 1.0 in stage 5.0 (TID 11, localhost): java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.spark.sql.types.UTF8String
>         at org.apache.spark.sql.parquet.RowWriteSupport.writePrimitive(ParquetTableSupport.scala:201)
>         at org.apache.spark.sql.parquet.RowWriteSupport.writeValue(ParquetTableSupport.scala:192)
>         at org.apache.spark.sql.parquet.RowWriteSupport$$anonfun$writeMap$2.apply(ParquetTableSupport.scala:284)
>         at org.apache.spark.sql.parquet.RowWriteSupport$$anonfun$writeMap$2.apply(ParquetTableSupport.scala:281)
>         at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>         at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
>         at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
>         at org.apache.spark.sql.parquet.RowWriteSupport.writeMap(ParquetTableSupport.scala:281)
>         at org.apache.spark.sql.parquet.RowWriteSupport.writeValue(ParquetTableSupport.scala:186)
>         at org.apache.spark.sql.parquet.RowWriteSupport.write(ParquetTableSupport.scala:171)
>         at org.apache.spark.sql.parquet.RowWriteSupport.write(ParquetTableSupport.scala:134)
>         at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:120)
>         at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:81)
>         at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:37)
>         at org.apache.spark.sql.parquet.ParquetRelation2.org$apache$spark$sql$parquet$ParquetRelation2$$writeShard$1(newParquet.scala:699)
>         at org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$insert$2.apply(newParquet.scala:717)
>         at org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$insert$2.apply(newParquet.scala:717)
>         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
>         at org.apache.spark.scheduler.Task.run(Task.scala:70)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> {code}
> This worked correctly in Spark 1.3.
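> The ClassCastException above points at the root cause: in this repro, jsonRDD with the user-supplied MapType schema leaves the map's keys and values as plain java.lang.String, while the Parquet write path casts them to Spark SQL's internal UTF8String. As a minimal sketch only (plain Scala, not Spark's actual code; InternalUTF8 is a hypothetical stand-in for org.apache.spark.sql.types.UTF8String), the kind of conversion the writer expects looks like this:
> {code}
> // Hypothetical stand-in for Spark's internal UTF8String type.
> final case class InternalUTF8(bytes: Array[Byte]) {
>   override def toString: String = new String(bytes, "UTF-8")
> }
>
> // Convert an externally parsed map so that downstream code can safely
> // cast every key and value to the internal string type.
> def toInternalMap(m: Map[String, String]): Map[InternalUTF8, InternalUTF8] =
>   m.map { case (k, v) =>
>     InternalUTF8(k.getBytes("UTF-8")) -> InternalUTF8(v.getBytes("UTF-8"))
>   }
>
> // The map produced from the JSON line {"obj": {"a": "hello"}}:
> toInternalMap(Map("a" -> "hello")).foreach(println)  // prints (a,hello)
> {code}
> The real fix is in pull request 6299, linked above.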



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org