Posted to issues@spark.apache.org by "Xiangrui Meng (JIRA)" <ji...@apache.org> on 2015/04/02 08:25:53 UTC

[jira] [Created] (SPARK-6672) createDataFrame from RDD[Row] with UDTs cannot be saved

Xiangrui Meng created SPARK-6672:
------------------------------------

             Summary: createDataFrame from RDD[Row] with UDTs cannot be saved
                 Key: SPARK-6672
                 URL: https://issues.apache.org/jira/browse/SPARK-6672
             Project: Spark
          Issue Type: Bug
          Components: MLlib, SQL
    Affects Versions: 1.3.0
            Reporter: Xiangrui Meng


Reported by Jaonary (https://www.mail-archive.com/user@spark.apache.org/msg25218.html):

{code}
import org.apache.spark.mllib.linalg._
import org.apache.spark.mllib.regression._
val df0 = sqlContext.createDataFrame(Seq(LabeledPoint(1.0, Vectors.dense(2.0, 3.0))))
df0.save("/tmp/df0") // works
val df1 = sqlContext.createDataFrame(df0.rdd, df0.schema) // rebuild from RDD[Row] + original schema
df1.save("/tmp/df1") // error
{code}

throws

{code}
15/04/01 23:24:16 INFO DAGScheduler: Job 3 failed: runJob at newParquet.scala:686, took 0.288304 s
org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 3.0 failed 1 times, most recent failure: Lost task 3.0 in stage 3.0 (TID 15, localhost): java.lang.ClassCastException: org.apache.spark.mllib.linalg.DenseVector cannot be cast to org.apache.spark.sql.Row
	at org.apache.spark.sql.parquet.RowWriteSupport.writeValue(ParquetTableSupport.scala:191)
	at org.apache.spark.sql.parquet.RowWriteSupport.writeValue(ParquetTableSupport.scala:182)
	at org.apache.spark.sql.parquet.RowWriteSupport.write(ParquetTableSupport.scala:171)
	at org.apache.spark.sql.parquet.RowWriteSupport.write(ParquetTableSupport.scala:134)
	at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:120)
	at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:81)
	at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:37)
	at org.apache.spark.sql.parquet.ParquetRelation2.org$apache$spark$sql$parquet$ParquetRelation2$$writeShard$1(newParquet.scala:668)
	at org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$insert$2.apply(newParquet.scala:686)
	at org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$insert$2.apply(newParquet.scala:686)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
	at org.apache.spark.scheduler.Task.run(Task.scala:64)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:212)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:744)
{code}
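
A plausible reading of the trace (an assumption, not stated above): df0.rdd hands back rows that still hold the external UDT objects (DenseVector), while createDataFrame(rowRDD, schema) appears to expect UDT columns to already be in their serialized form, so the Parquet RowWriteSupport ends up casting a DenseVector to Row. Under that assumption, a minimal workaround sketch (hypothetical, not verified against 1.3.0) is to rebuild the DataFrame from a typed RDD so the VectorUDT conversion is applied again; it reuses sqlContext and df0 from the reproduction above.

{code}
// Hypothetical workaround sketch (assumption, not verified on 1.3.0):
// rebuild the DataFrame from a typed RDD of LabeledPoint instead of the
// RDD[Row] returned by df0.rdd, so createDataFrame serializes the vector
// column through VectorUDT again.
import org.apache.spark.mllib.linalg._
import org.apache.spark.mllib.regression._

val typed = df0.rdd.map { row =>
  LabeledPoint(row.getDouble(0), row.getAs[Vector](1))
}
val df1Fixed = sqlContext.createDataFrame(typed)
df1Fixed.save("/tmp/df1_fixed") // expected to behave like df0.save
{code}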


