Posted to issues@spark.apache.org by "Xiangrui Meng (JIRA)" <ji...@apache.org> on 2015/07/17 10:13:05 UTC

[jira] [Commented] (SPARK-6486) Add BlockMatrix in PySpark

    [ https://issues.apache.org/jira/browse/SPARK-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14630964#comment-14630964 ] 

Xiangrui Meng commented on SPARK-6486:
--------------------------------------

[~MechCoder] I think you can start this task without implementing the base DistributedMatrix class or the conversions from/to other distributed matrices. Does that sound good to you? Please let me know if you see any blockers ahead.
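
For reference, here is a minimal sketch of how the user-facing API could look, assuming a BlockMatrix class in pyspark.mllib.linalg.distributed whose constructor mirrors the Scala one (an RDD of ((blockRowIndex, blockColIndex), sub-matrix) pairs plus the per-block dimensions). This only illustrates the intended surface, not a finished implementation:

    from pyspark import SparkContext
    from pyspark.mllib.linalg import Matrices
    from pyspark.mllib.linalg.distributed import BlockMatrix

    sc = SparkContext.getOrCreate()  # or reuse the `sc` from the pyspark shell

    # Two 3x2 dense blocks stacked vertically form a 6x2 distributed matrix.
    blocks = sc.parallelize([
        ((0, 0), Matrices.dense(3, 2, [1, 2, 3, 4, 5, 6])),
        ((1, 0), Matrices.dense(3, 2, [7, 8, 9, 10, 11, 12])),
    ])
    mat = BlockMatrix(blocks, rowsPerBlock=3, colsPerBlock=2)

    print(mat.numRows(), mat.numCols())  # 6 2
    local = mat.toLocalMatrix()          # collect into a local dense matrix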

> Add BlockMatrix in PySpark
> --------------------------
>
>                 Key: SPARK-6486
>                 URL: https://issues.apache.org/jira/browse/SPARK-6486
>             Project: Spark
>          Issue Type: Sub-task
>          Components: MLlib, PySpark
>            Reporter: Xiangrui Meng
>
> We should add BlockMatrix to PySpark. Internally, we can use DataFrames and MatrixUDT for serialization. This JIRA should cover conversions from IndexedRowMatrix/CoordinateMatrix to block matrices, but it does NOT cover linear algebra operations on block matrices.
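
A hedged sketch of the conversions mentioned in the description, assuming they are exposed as toBlockMatrix methods on IndexedRowMatrix and CoordinateMatrix, mirroring the Scala API (the method name and the 1024x1024 default block size are assumptions):

    from pyspark import SparkContext
    from pyspark.mllib.linalg.distributed import (
        CoordinateMatrix, IndexedRow, IndexedRowMatrix, MatrixEntry)

    sc = SparkContext.getOrCreate()  # or reuse the `sc` from the pyspark shell

    # IndexedRowMatrix -> BlockMatrix (block size chosen explicitly here)
    rows = sc.parallelize([IndexedRow(0, [1.0, 2.0, 3.0]),
                           IndexedRow(1, [4.0, 5.0, 6.0])])
    block_mat = IndexedRowMatrix(rows).toBlockMatrix(rowsPerBlock=2, colsPerBlock=3)

    # CoordinateMatrix -> BlockMatrix (assumed Scala-style 1024x1024 default blocks)
    entries = sc.parallelize([MatrixEntry(0, 0, 1.2), MatrixEntry(5, 3, 2.5)])
    block_mat2 = CoordinateMatrix(entries).toBlockMatrix()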



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org