Posted to dev@datafu.apache.org by "Russell Jurney (JIRA)" <ji...@apache.org> on 2019/05/28 04:13:00 UTC

[jira] [Comment Edited] (DATAFU-148) Setup Spark sub-project

    [ https://issues.apache.org/jira/browse/DATAFU-148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849318#comment-16849318 ] 

Russell Jurney edited comment on DATAFU-148 at 5/28/19 4:12 AM:
----------------------------------------------------------------

Why not have an `activate()` or `initialize()` method and then add these methods to the `DataFrame` class? [`pymongo_spark`](https://github.com/mongodb/mongo-hadoop/blob/master/spark/src/main/python/pymongo_spark.py) (part of [mongo-hadoop](https://github.com/mongodb/mongo-hadoop)) does this to add methods like `pyspark.sql.DataFrame.saveToMongoDB`, which keeps the API consistent with PySpark's own.

See: https://github.com/mongodb/mongo-hadoop/tree/master/spark/src/main/python#usage

You use it like this:

{{
import pymongo_spark
pymongo_spark.activate()

...

some_rdd.saveToMongoDB('mongodb://localhost:27017/db.output_collection')
}}

And internally it looks like this:

{{
import pyspark

def activate():
    """Activate integration between PyMongo and PySpark.

    This function only needs to be called once.
    """
    # Patch methods in rather than extending these classes.  Many RDD methods
    # result in the creation of a new RDD, whose exact type is beyond our
    # control. However, we would still like to be able to call any of our
    # methods on the resulting RDDs.
    # (saveToMongoDB is a module-level function defined elsewhere in
    # pymongo_spark.)
    pyspark.rdd.RDD.saveToMongoDB = saveToMongoDB
}}
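
For DataFu the same pattern could patch methods onto `pyspark.sql.DataFrame` instead of `RDD`. A minimal sketch of the idea; the module name `datafu_spark` and the method `dedup_top_n` are hypothetical, for illustration only, not an existing DataFu API:

{{
import pyspark.sql
from pyspark.sql import Window, functions as F

def dedup_top_n(self, group_col, order_col, n=1):
    """Hypothetical DataFu-style method: keep the first n rows per
    group_col, ordered by order_col descending."""
    w = Window.partitionBy(group_col).orderBy(F.col(order_col).desc())
    return (self.withColumn('_rank', F.row_number().over(w))
                .where(F.col('_rank') <= n)
                .drop('_rank'))

def activate():
    """Patch DataFu methods onto pyspark.sql.DataFrame.
    Only needs to be called once, before the methods are used."""
    pyspark.sql.DataFrame.dedup_top_n = dedup_top_n
}}

After `datafu_spark.activate()`, callers would write `df.dedup_top_n('user_id', 'timestamp')` exactly as they would any built-in DataFrame method, which is the consistency argument above.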


> Setup Spark sub-project
> -----------------------
>
>                 Key: DATAFU-148
>                 URL: https://issues.apache.org/jira/browse/DATAFU-148
>             Project: DataFu
>          Issue Type: New Feature
>            Reporter: Eyal Allweil
>            Assignee: Eyal Allweil
>            Priority: Major
>         Attachments: patch.diff, patch.diff
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> Create a skeleton Spark sub project for Spark code to be contributed to DataFu



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)