Posted to issues@spark.apache.org by "Joseph K. Bradley (JIRA)" <ji...@apache.org> on 2016/01/06 19:48:39 UTC

[jira] [Resolved] (SPARK-11531) PySpark SparseVector: improve error message for bad indices

     [ https://issues.apache.org/jira/browse/SPARK-11531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph K. Bradley resolved SPARK-11531.
---------------------------------------
       Resolution: Fixed
    Fix Version/s: 2.0.0

Issue resolved by pull request 9525
[https://github.com/apache/spark/pull/9525]

> PySpark SparseVector: improve error message for bad indices
> -----------------------------------------------------------
>
>                 Key: SPARK-11531
>                 URL: https://issues.apache.org/jira/browse/SPARK-11531
>             Project: Spark
>          Issue Type: Improvement
>          Components: MLlib, PySpark
>    Affects Versions: 1.5.1
>            Reporter: Urvish Parikh
>            Assignee: Rekha Joshi
>            Priority: Trivial
>             Fix For: 2.0.0
>
>
> Currently, when a SparseVector is constructed with duplicate indices, it raises a TypeError with the message "indices array must be sorted".
> From: https://github.com/apache/spark/blob/master/python/pyspark/mllib/linalg/__init__.py#L531
> {code}
> for i in xrange(len(self.indices) - 1):
>     if self.indices[i] >= self.indices[i + 1]:
>         raise TypeError("indices array must be sorted")
> {code}
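>
> For example, constructing a SparseVector with a duplicate index trips this check and reports the misleading "sorted" message (illustrative REPL excerpt):
> {code}
> >>> from pyspark.mllib.linalg import SparseVector
> >>> SparseVector(4, [1, 1], [3.0, 4.0])
> Traceback (most recent call last):
>   ...
> TypeError: indices array must be sorted
> {code}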
> The message should match the Scala version, which reports "Found duplicate indices":
> https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/linalg/Vectors.scala#L301 
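> A minimal sketch of how the check could distinguish the two cases, mirroring the Scala behavior (illustrative only; not necessarily the exact change made in pull request 9525):
> {code}
> for i in xrange(len(self.indices) - 1):
>     if self.indices[i] == self.indices[i + 1]:
>         # Duplicate index: report it explicitly, matching the Scala message.
>         raise TypeError("Found duplicate indices: %s" % self.indices[i])
>     elif self.indices[i] > self.indices[i + 1]:
>         # Genuinely unsorted indices keep the original message.
>         raise TypeError("indices array must be sorted")
> {code}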



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org