Posted to reviews@spark.apache.org by MechCoder <gi...@git.apache.org> on 2016/07/29 22:41:30 UTC

[GitHub] spark pull request #13571: [SPARK-15369][WIP][RFC][PySpark][SQL] Expose pote...

Github user MechCoder commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13571#discussion_r72870886
  
    --- Diff: python/pyspark/sql/functions.py ---
    @@ -1731,13 +1749,115 @@ def sort_array(col, asc=True):
     
     # ---------------------------- User Defined Function ----------------------------------
     
    +def _wrap_jython_func(sc, src, ser_vars, ser_imports, setup_code, returnType):
    +    return sc._jvm.org.apache.spark.sql.execution.python.JythonFunction(
    +        src, ser_vars, ser_imports, setup_code, sc._jsc.sc())
    +
    +
     def _wrap_function(sc, func, returnType):
         command = (func, returnType)
         pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)
         return sc._jvm.PythonFunction(bytearray(pickled_command), env, includes, sc.pythonExec,
                                       sc.pythonVer, broadcast_vars, sc._javaAccumulator)
     
     
    +class UserDefinedJythonFunction(object):
    +    """
    +    User defined function in Jython - note this might be a bad idea to use.
    +
    +    .. versionadded:: 2.0
    +    .. Note: Experimental
    +    """
    +    def __init__(self, func, returnType, name=None, setupCode=""):
    +        self.func = func
    +        self.returnType = returnType
    +        self.setupCode = setupCode
    +        self._judf = self._create_judf(name)
    +
    +    def _create_judf(self, name):
    +        func = self.func
    +        from pyspark.sql import SQLContext
    +        sc = SparkContext.getOrCreate()
    +        # Empty strings allow the Scala code to recognize no data and skip adding the Jython
    +        # code to handle vars or imports if not needed.
    +        serialized_vars = ""
    +        serialized_imports = ""
    +        if isinstance(func, basestring):
    +            src = func
    +        else:
    +            try:
    +                import dill
    --- End diff --
    
    Currently PySpark seems to use cloudpickle to serialize and deserialize functions that the standard pickle module cannot handle on its own. What are the advantages of using dill here instead of cloudpickle?
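    
    For reference, a rough sketch of the kind of function both libraries can round-trip (the names below are only illustrative, not part of the patch): a closure that the standard pickle module rejects. As far as I can tell, `_wrap_function` already goes through `CloudPickleSerializer` via `_prepare_for_python_RDD`, so dill would be an extra dependency for what looks like the same capability.
    
        import dill  # third-party, not bundled with Spark
        from pyspark.serializers import CloudPickleSerializer  # ships with PySpark
        
        def make_adder(offset):
            # Returns a closure; the standard pickle module cannot serialize
            # lambdas or closures by value, which is why a by-value pickler
            # (cloudpickle or dill) is needed for Python UDFs.
            return lambda x: x + offset
        
        add_ten = make_adder(10)
        
        # cloudpickle path, as used today by _prepare_for_python_RDD for UDFs
        ser = CloudPickleSerializer()
        restored = ser.loads(ser.dumps(add_ten))
        assert restored(5) == 15
        
        # dill path, as imported inside _create_judf in this patch
        restored = dill.loads(dill.dumps(add_ten))
        assert restored(5) == 15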


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org