Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2016/03/01 00:06:18 UTC

[jira] [Updated] (SPARK-13581) LibSVM throws MatchError

     [ https://issues.apache.org/jira/browse/SPARK-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-13581:
------------------------------
    Assignee: Jeff Zhang

I guess the input it's hoping to treat as a vector is actually just a double from the libSVM input. As to why, I don't know; it seems like a legitimate bug, though.
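A minimal standalone sketch of how this kind of failure arises (this is not Spark's actual VectorUDT code; all names here are made up for illustration): a serializer whose pattern match covers only vector-like shapes throws scala.MatchError when handed a plain Double, which matches the "1.0 (of class java.lang.Double)" message in the stack trace quoted below.

```scala
// Hypothetical sketch, not the real VectorUDT: a match that only handles
// vector-like cases fails with scala.MatchError on a plain Double.
sealed trait Vec
final case class Dense(values: Array[Double]) extends Vec
final case class Sparse(size: Int, indices: Array[Int], values: Array[Double]) extends Vec

object MatchErrorSketch {
  def serialize(obj: Any): String = obj match {
    case Dense(vs)       => s"dense(${vs.mkString(",")})"
    case Sparse(n, _, _) => s"sparse($n)"
    // No case for Double: serialize(1.0) throws
    // scala.MatchError: 1.0 (of class java.lang.Double)
  }

  def main(args: Array[String]): Unit = {
    println(serialize(Dense(Array(1.0, 0.0))))
    try serialize(1.0)
    catch { case e: scala.MatchError => println("caught: " + e.getMessage) }
  }
}
```

The fix for such a bug is usually either to add the missing case to the match or to stop feeding the serializer the wrong type upstream.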

> LibSVM throws MatchError
> ------------------------
>
>                 Key: SPARK-13581
>                 URL: https://issues.apache.org/jira/browse/SPARK-13581
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>            Reporter: Jakob Odersky
>            Assignee: Jeff Zhang
>            Priority: Minor
>
> When running an action on a DataFrame obtained by reading from a libsvm file, a MatchError is thrown; however, doing the same on a cached DataFrame works fine.
> {code}
> val df = sqlContext.read.format("libsvm").load("../data/mllib/sample_libsvm_data.txt")
> df.select(df("features")).show() //MatchError
> df.cache()
> df.select(df("features")).show() //OK
> {code}
> The exception stack trace is the following:
> {code}
> scala.MatchError: 1.0 (of class java.lang.Double)
> [info] 	at org.apache.spark.mllib.linalg.VectorUDT.serialize(Vectors.scala:207)
> [info] 	at org.apache.spark.mllib.linalg.VectorUDT.serialize(Vectors.scala:192)
> [info] 	at org.apache.spark.sql.catalyst.CatalystTypeConverters$UDTConverter.toCatalystImpl(CatalystTypeConverters.scala:142)
> [info] 	at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
> [info] 	at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
> [info] 	at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:59)
> [info] 	at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:56)
> {code}
> This issue first appeared in commit {{1dac964c1}}, in PR [#9595|https://github.com/apache/spark/pull/9595] fixing SPARK-11622.
> [~jeffzhang], do you have any insight into what could be going on?
> cc [~iyounus]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
