Posted to issues@spark.apache.org by "Michael Yannakopoulos (JIRA)" <ji...@apache.org> on 2014/08/10 01:39:11 UTC
[jira] [Commented] (SPARK-2871) Missing API in PySpark
[ https://issues.apache.org/jira/browse/SPARK-2871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14091940#comment-14091940 ]
Michael Yannakopoulos commented on SPARK-2871:
----------------------------------------------
I can help with this issue. My understanding is that you would like to port the Scala functionality listed below to Python. Is that correct?
> Missing API in PySpark
> ----------------------
>
> Key: SPARK-2871
> URL: https://issues.apache.org/jira/browse/SPARK-2871
> Project: Spark
> Issue Type: Improvement
> Reporter: Davies Liu
>
> There are several APIs missing in PySpark:
> RDD.collectPartitions()
> RDD.histogram()
> RDD.zipWithIndex()
> RDD.zipWithUniqueId()
> RDD.min(comp)
> RDD.max(comp)
> A number of APIs related to approximate jobs.
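
Of the APIs listed above, zipWithIndex() and zipWithUniqueId() differ only in how indices are assigned across partitions. The following is a minimal pure-Python sketch (no Spark required) of that index arithmetic; the partition layout and function names are illustrative only, not Spark's actual implementation.

```python
# Sketch of the index arithmetic behind the Scala RDD methods.
# zipWithUniqueId: id = indexInPartition * numPartitions + partitionIndex
# (ids are unique but not consecutive, and need no extra pass over the data).
# zipWithIndex: consecutive ids require knowing the sizes of all earlier
# partitions first (Spark does a separate count job for this).

def zip_with_unique_id(partitions):
    n = len(partitions)
    return [
        [(item, idx * n + p) for idx, item in enumerate(part)]
        for p, part in enumerate(partitions)
    ]

def zip_with_index(partitions):
    # Compute the starting offset of each partition from the sizes
    # of the partitions before it.
    offsets, total = [], 0
    for part in partitions:
        offsets.append(total)
        total += len(part)
    return [
        [(item, offsets[p] + idx) for idx, item in enumerate(part)]
        for p, part in enumerate(partitions)
    ]

# Hypothetical 3-partition layout for illustration.
parts = [["a", "b"], ["c"], ["d", "e"]]
print(zip_with_unique_id(parts))  # ids unique, not consecutive
print(zip_with_index(parts))      # ids 0..4 in order
```

The trade-off this illustrates: zipWithUniqueId() is cheaper (one pass), while zipWithIndex() guarantees consecutive indices at the cost of an extra count over the partitions.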
--
This message was sent by Atlassian JIRA
(v6.2#6252)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org