Posted to issues@spark.apache.org by "Gianmarco Donetti (JIRA)" <ji...@apache.org> on 2018/01/13 08:43:00 UTC

[jira] [Comment Edited] (SPARK-23060) RDD's apply function

    [ https://issues.apache.org/jira/browse/SPARK-23060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16325043#comment-16325043 ] 

Gianmarco Donetti edited comment on SPARK-23060 at 1/13/18 8:42 AM:
--------------------------------------------------------------------

The reason for this request is the absence of a "pipe" function for RDDs.
I often handle use cases where I have one RDD and a series of functions that must be applied to it in sequence, each taking an RDD as input and returning another RDD. I would like to avoid this:

{code:python}
final_rdd = func_3(func_2(func_1(initial_rdd)))
{code}

and instead be able to write this:

{code:python}
final_rdd = initial_rdd.apply(func_1).apply(func_2).apply(func_3)
{code}
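
For illustration, here is a minimal sketch of what such a method could look like, assuming it were added to pyspark's RDD class. The monkey-patching is for demonstration only; `apply` is the proposed name, not an existing API:

{code:python}
from pyspark import RDD

def apply(self, f):
    """Pass this RDD to f and return the RDD that f produces."""
    return f(self)

# Demonstration only: attach the proposed method to the existing class,
# so initial_rdd.apply(func_1).apply(func_2).apply(func_3) becomes possible.
RDD.apply = apply
{code}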





> RDD's apply function
> --------------------
>
>                 Key: SPARK-23060
>                 URL: https://issues.apache.org/jira/browse/SPARK-23060
>             Project: Spark
>          Issue Type: New Feature
>          Components: PySpark
>    Affects Versions: 2.2.1
>            Reporter: Gianmarco Donetti
>            Priority: Minor
>              Labels: features, newbie
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> New function for RDDs -> apply
>         >>> def foo(rdd):
>         ...     return rdd.map(lambda x: tuple(x.split('|'))).filter(lambda x: x[0] == 'ERROR')
>         >>> rdd = sc.parallelize(['ERROR|10', 'ERROR|12', 'WARNING|10', 'INFO|2'])
>         >>> result = rdd.apply(foo)
>         >>> result.collect()
>         [('ERROR', '10'), ('ERROR', '12')]
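
For comparison, the same chaining can already be expressed today without any change to pyspark, e.g. with functools.reduce; the helper name `pipe` below is hypothetical:

{code:python}
from functools import reduce

def pipe(rdd, *funcs):
    """Apply each RDD-to-RDD function to the previous result, in order."""
    return reduce(lambda acc, f: f(acc), funcs, rdd)

# Equivalent to func_3(func_2(func_1(initial_rdd))):
# final_rdd = pipe(initial_rdd, func_1, func_2, func_3)
{code}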


