Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/08/26 00:32:14 UTC

[GitHub] [spark] HyukjinKwon commented on a diff in pull request #37635: [SPARK-40131][PYTHON] Support NumPy ndarray in built-in functions

HyukjinKwon commented on code in PR #37635:
URL: https://github.com/apache/spark/pull/37635#discussion_r955524424


##########
python/pyspark/sql/types.py:
##########
@@ -2268,12 +2268,40 @@ def convert(self, obj: "np.generic", gateway_client: GatewayClient) -> Any:
         return obj.item()
 
 
+class NumpyArrayConverter:
+    def can_convert(self, obj: Any) -> bool:
+        return has_numpy and isinstance(obj, np.ndarray)
+
+    def convert(self, obj: "np.ndarray", gateway_client: GatewayClient) -> JavaObject:
+        from pyspark import SparkContext
+
+        gateway = SparkContext._gateway
+        assert gateway is not None
+
+        plist = obj.tolist()
+        # np.array([]).dtype is dtype('float64') so set float for empty plist
+        ptpe = type(plist[0]) if len(plist) > 0 else float
+        tpe_dict = {
+            int: gateway.jvm.int,

Review Comment:
   Shouldn't we map this type from the NumPy dtype?
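
   A minimal sketch of the dtype-based mapping being suggested, for illustration only; the helper name `_jvm_type_from_dtype` and the exact dtype coverage below are assumptions, not code from the PR:

   import numpy as np

   def _jvm_type_from_dtype(gateway, dtype):
       # Illustrative only: map a NumPy dtype to a py4j JVM element type
       # usable with gateway.new_array(); the coverage here is an assumption.
       mapping = {
           np.dtype("bool"): gateway.jvm.boolean,
           np.dtype("int32"): gateway.jvm.int,
           np.dtype("int64"): gateway.jvm.long,
           np.dtype("float32"): gateway.jvm.float,
           np.dtype("float64"): gateway.jvm.double,
       }
       return mapping.get(dtype)

   # Inside convert(), instead of inspecting type(plist[0]):
   #     jtpe = _jvm_type_from_dtype(gateway, obj.dtype)

   With a mapping keyed on obj.dtype, an empty ndarray keeps the dtype it was created with (float64 by default) rather than needing the empty-list special case.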



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org