Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2021/03/05 03:04:03 UTC

[GitHub] [spark] HyukjinKwon commented on a change in pull request #31735: [WIP][SPARK-34600][Pyspark][SQL] Return User-defined types from Pandas UDF

HyukjinKwon commented on a change in pull request #31735:
URL: https://github.com/apache/spark/pull/31735#discussion_r587996946



##########
File path: python/pyspark/sql/pandas/serializers.py
##########
@@ -183,6 +212,21 @@ def create_array(s, t):
                     raise e
             return array
 
+        def to_plain_struct(cell):

Review comment:
       The performance here will be very bad: this dispatches on the type for every single value. We should create the conversion function once, based on the type, and reuse it for the whole column.
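
       For example, a minimal sketch of the idea (the `make_converter` helper is hypothetical, not code from this PR), assuming `s` is the column of values and `t` its Spark type, as in the enclosing `create_array(s, t)`:

       ```python
       from pyspark.sql.types import UserDefinedType

       def make_converter(data_type):
           """Pick the per-value conversion once, based on the column type."""
           if isinstance(data_type, UserDefinedType):
               # UDT cells are serialized into their underlying SQL representation.
               return data_type.serialize
           # Plain types pass through unchanged.
           return lambda value: value

       # Type dispatch happens once per column, not once per value:
       convert = make_converter(t)
       array = [convert(cell) for cell in s]
       ```

       This keeps the per-value loop free of `isinstance` checks.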

##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/python/ArrowEvalPythonExec.scala
##########
@@ -89,9 +90,35 @@ case class ArrowEvalPythonExec(udfs: Seq[PythonUDF], resultAttrs: Seq[Attribute]
 
     columnarBatchIter.flatMap { batch =>
       val actualDataTypes = (0 until batch.numCols()).map(i => batch.column(i).dataType())
-      assert(outputTypes == actualDataTypes, "Invalid schema from pandas_udf: " +
-        s"expected ${outputTypes.mkString(", ")}, got ${actualDataTypes.mkString(", ")}")
+      assert(plainSchema(outputTypes) == plainSchema(actualDataTypes),

Review comment:
       I think we wouldn't need to call `plainSchema(actualDataTypes)` here, because a schema read back from Arrow cannot contain PySpark's UDTs?

##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/python/ArrowPythonRunner.scala
##########
@@ -54,6 +54,9 @@ class ArrowPythonRunner(
     "Pandas execution requires more than 4 bytes. Please set higher buffer. " +
       s"Please change '${SQLConf.PANDAS_UDF_BUFFER_SIZE.key}'.")
 
+  /** This is a private key */
+  private val PANDAS_UDF_RETURN_TYPE_JSON = "spark.sql.execution.pandas.udf.return.type.json"

Review comment:
       Can we avoid sending this together with the configurations?

##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/python/ArrowEvalPythonExec.scala
##########
@@ -24,9 +24,10 @@ import org.apache.spark.api.python.ChainedPythonFunctions
 import org.apache.spark.sql.catalyst.InternalRow
 import org.apache.spark.sql.catalyst.expressions._
 import org.apache.spark.sql.execution.SparkPlan
-import org.apache.spark.sql.types.StructType
+import org.apache.spark.sql.types._
 import org.apache.spark.sql.util.ArrowUtils
 
+

Review comment:
       Let's remove all these unrelated changes.

##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/python/ArrowEvalPythonExec.scala
##########
@@ -24,9 +24,10 @@ import org.apache.spark.api.python.ChainedPythonFunctions
 import org.apache.spark.sql.catalyst.InternalRow
 import org.apache.spark.sql.catalyst.expressions._
 import org.apache.spark.sql.execution.SparkPlan
-import org.apache.spark.sql.types.StructType
+import org.apache.spark.sql.types._

Review comment:
       and let's avoid the wildcard import.



