Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/11/01 07:20:38 UTC

[GitHub] [spark] grundprinzip opened a new pull request, #38462: [SPARK-40533] [CONNECT] [PYTHON] Support most built-in literal types for Python in Spark Connect

grundprinzip opened a new pull request, #38462:
URL: https://github.com/apache/spark/pull/38462

   ### What changes were proposed in this pull request?
   
   This PR implements the client-side serialization of most Python literals into Spark Connect literals. 
   
   ### Why are the changes needed?
   This expands the literal type support of the Spark Connect Python client.
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   ### How was this patch tested?
   Unit tests.
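   For a sense of what this enables at the API level, here is a hedged usage sketch. Whether `lit` is already exposed under `pyspark.sql.connect.functions` at this point is an assumption; the point is the range of Python values that now map to Connect literals (the proto field names follow the diffs discussed below).
   
```python
import datetime

# Assumption: `lit` is importable from the Connect functions module.
from pyspark.sql.connect.functions import lit

supported_values = [
    42,                                      # -> literal.i64
    3.14,                                    # -> literal.fp64
    True,                                    # -> literal.boolean
    "hello",                                 # -> literal.string
    b"\x00\x01",                             # -> literal.binary
    datetime.date(2022, 11, 1),              # -> literal.date (days since epoch)
    datetime.datetime(2022, 11, 1, 7, 20),   # -> literal.timestamp (microseconds since epoch)
]

columns = [lit(v) for v in supported_values]
```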




[GitHub] [spark] HyukjinKwon commented on a diff in pull request #38462: [SPARK-40533] [CONNECT] [PYTHON] Support most built-in literal types for Python in Spark Connect

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on code in PR #38462:
URL: https://github.com/apache/spark/pull/38462#discussion_r1012491071


##########
python/pyspark/sql/connect/_typing.py:
##########
@@ -15,5 +15,7 @@
 # limitations under the License.
 #
 from typing import Union
+from datetime import date, time, datetime
 
 PrimitiveType = Union[str, int, bool, float]
+LiteralType = Union[PrimitiveType, Union[date, time, datetime]]

Review Comment:
   Seems not used.
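   For context, a hedged sketch of how an alias like `LiteralType` would typically be consumed once something references it; the function below is purely illustrative and not part of the PR.
   
```python
from typing import Union
from datetime import date, time, datetime

PrimitiveType = Union[str, int, bool, float]
LiteralType = Union[PrimitiveType, date, time, datetime]


def describe_literal(value: LiteralType) -> str:
    # Illustrative consumer: a typed entry point is the usual place such an
    # alias would show up, e.g. a `lit(value: LiteralType)` signature.
    return f"{value!r} ({type(value).__name__})"
```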





[GitHub] [spark] HyukjinKwon commented on a diff in pull request #38462: [SPARK-40533] [CONNECT] [PYTHON] Support most built-in literal types for Python in Spark Connect

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on code in PR #38462:
URL: https://github.com/apache/spark/pull/38462#discussion_r1013928764


##########
python/pyspark/sql/connect/column.py:
##########
@@ -99,11 +101,59 @@ def to_plan(self, session: Optional["RemoteSparkSession"]) -> "proto.Expression"
         value_type = type(self._value)
         exp = proto.Expression()
         if value_type is int:
-            exp.literal.i32 = cast(int, self._value)
+            exp.literal.i64 = cast(int, self._value)
+        elif value_type is bool:
+            exp.literal.boolean = cast(bool, self._value)
         elif value_type is str:
             exp.literal.string = cast(str, self._value)
         elif value_type is float:
             exp.literal.fp64 = cast(float, self._value)
+        elif value_type is decimal.Decimal:
+            d_v = cast(decimal.Decimal, self._value)
+            v_tuple = d_v.as_tuple()
+            exp.literal.decimal.scale = abs(v_tuple.exponent)
+            exp.literal.decimal.precision = len(v_tuple.digits) - abs(v_tuple.exponent)
+            # Two complement yeah...
+            raise ValueError("Python Decimal not supported.")

Review Comment:
   Can we remove this if it is not implemented yet?
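   For reference, a hedged sketch of the pieces a Decimal literal would need, derived from `Decimal.as_tuple()`: scale, precision, and a two's-complement encoding of the unscaled value (presumably what the "two complement" comment above alludes to). The helper `decimal_parts` is illustrative; how the unscaled bytes would be attached to the proto is not shown.
   
```python
import decimal


def decimal_parts(d: decimal.Decimal):
    """Hedged sketch: scale, precision and the two's-complement unscaled value of a Decimal."""
    sign, digits, exponent = d.as_tuple()
    scale = max(0, -exponent)
    precision = max(len(digits) + max(exponent, 0), scale)
    # Unscaled integer: the value with the decimal point shifted away.
    unscaled = int(d.scaleb(scale))
    # Big-endian two's-complement encoding of the unscaled integer.
    nbytes = max(1, (unscaled.bit_length() + 8) // 8)
    return scale, precision, unscaled.to_bytes(nbytes, byteorder="big", signed=True)


# Decimal("12.345") -> scale=3, precision=5, unscaled=12345
print(decimal_parts(decimal.Decimal("12.345")))
```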





[GitHub] [spark] grundprinzip commented on a diff in pull request #38462: [SPARK-40533] [CONNECT] [PYTHON] Support most built-in literal types for Python in Spark Connect

Posted by GitBox <gi...@apache.org>.
grundprinzip commented on code in PR #38462:
URL: https://github.com/apache/spark/pull/38462#discussion_r1012525730


##########
python/pyspark/sql/connect/column.py:
##########
@@ -99,11 +101,59 @@ def to_plan(self, session: Optional["RemoteSparkSession"]) -> "proto.Expression"
         value_type = type(self._value)
         exp = proto.Expression()
         if value_type is int:
-            exp.literal.i32 = cast(int, self._value)
+            exp.literal.i64 = cast(int, self._value)
+        elif value_type is bool:
+            exp.literal.boolean = cast(bool, self._value)
         elif value_type is str:
             exp.literal.string = cast(str, self._value)
         elif value_type is float:
             exp.literal.fp64 = cast(float, self._value)
+        elif value_type is decimal.Decimal:
+            d_v = cast(decimal.Decimal, self._value)
+            v_tuple = d_v.as_tuple()
+            exp.literal.decimal.scale = abs(v_tuple.exponent)
+            exp.literal.decimal.precision = len(v_tuple.digits) - abs(v_tuple.exponent)
+            # Two complement yeah...
+            raise ValueError("cannnt....")
+        elif value_type is bytes:
+            exp.literal.binary = self._value
+        elif value_type is datetime.datetime:
+            # Microseconds since epoch.
+            dt = cast(datetime.datetime, self._value)
+            v = dt - datetime.datetime(1970, 1, 1, 0, 0, 0, 0)
+            exp.literal.timestamp = int(v / datetime.timedelta(microseconds=1))
+        elif value_type is datetime.time:
+            # Nanoseconds of the day.
+            tv = cast(datetime.time, self._value)
+            offset = (tv.second + tv.minute * 60 + tv.hour * 3600) * 1000 + tv.microsecond
+            exp.literal.time = int(offset * 1000)
+        elif value_type is datetime.date:
+            # Days since epoch.
+            days_since_epoch = (cast(datetime.date, self._value) - datetime.date(1970, 1, 1)).days
+            exp.literal.date = days_since_epoch
+        elif value_type is uuid.UUID:

Review Comment:
   ack, will remove.





[GitHub] [spark] grundprinzip commented on pull request #38462: [SPARK-40533] [CONNECT] [PYTHON] Support most built-in literal types for Python in Spark Connect

Posted by GitBox <gi...@apache.org>.
grundprinzip commented on PR #38462:
URL: https://github.com/apache/spark/pull/38462#issuecomment-1302100788

   @HyukjinKwon removed the UUID support, please have another look.




[GitHub] [spark] grundprinzip commented on a diff in pull request #38462: [SPARK-40533] [CONNECT] [PYTHON] Support most built-in literal types for Python in Spark Connect

Posted by GitBox <gi...@apache.org>.
grundprinzip commented on code in PR #38462:
URL: https://github.com/apache/spark/pull/38462#discussion_r1012884253


##########
python/pyspark/sql/connect/column.py:
##########
@@ -99,11 +101,59 @@ def to_plan(self, session: Optional["RemoteSparkSession"]) -> "proto.Expression"
         value_type = type(self._value)
         exp = proto.Expression()
         if value_type is int:
-            exp.literal.i32 = cast(int, self._value)
+            exp.literal.i64 = cast(int, self._value)
+        elif value_type is bool:
+            exp.literal.boolean = cast(bool, self._value)
         elif value_type is str:
             exp.literal.string = cast(str, self._value)
         elif value_type is float:
             exp.literal.fp64 = cast(float, self._value)
+        elif value_type is decimal.Decimal:
+            d_v = cast(decimal.Decimal, self._value)
+            v_tuple = d_v.as_tuple()
+            exp.literal.decimal.scale = abs(v_tuple.exponent)
+            exp.literal.decimal.precision = len(v_tuple.digits) - abs(v_tuple.exponent)
+            # Two complement yeah...
+            raise ValueError("cannnt....")
+        elif value_type is bytes:
+            exp.literal.binary = self._value
+        elif value_type is datetime.datetime:
+            # Microseconds since epoch.
+            dt = cast(datetime.datetime, self._value)
+            v = dt - datetime.datetime(1970, 1, 1, 0, 0, 0, 0)
+            exp.literal.timestamp = int(v / datetime.timedelta(microseconds=1))
+        elif value_type is datetime.time:
+            # Nanoseconds of the day.
+            tv = cast(datetime.time, self._value)
+            offset = (tv.second + tv.minute * 60 + tv.hour * 3600) * 1000 + tv.microsecond
+            exp.literal.time = int(offset * 1000)
+        elif value_type is datetime.date:
+            # Days since epoch.
+            days_since_epoch = (cast(datetime.date, self._value) - datetime.date(1970, 1, 1)).days
+            exp.literal.date = days_since_epoch
+        elif value_type is uuid.UUID:

Review Comment:
   Done.





[GitHub] [spark] HyukjinKwon commented on a diff in pull request #38462: [SPARK-40533] [CONNECT] [PYTHON] Support most built-in literal types for Python in Spark Connect

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on code in PR #38462:
URL: https://github.com/apache/spark/pull/38462#discussion_r1012489260


##########
python/pyspark/sql/connect/column.py:
##########
@@ -99,11 +101,59 @@ def to_plan(self, session: Optional["RemoteSparkSession"]) -> "proto.Expression"
         value_type = type(self._value)
         exp = proto.Expression()
         if value_type is int:
-            exp.literal.i32 = cast(int, self._value)
+            exp.literal.i64 = cast(int, self._value)
+        elif value_type is bool:
+            exp.literal.boolean = cast(bool, self._value)
         elif value_type is str:
             exp.literal.string = cast(str, self._value)
         elif value_type is float:
             exp.literal.fp64 = cast(float, self._value)
+        elif value_type is decimal.Decimal:
+            d_v = cast(decimal.Decimal, self._value)
+            v_tuple = d_v.as_tuple()
+            exp.literal.decimal.scale = abs(v_tuple.exponent)
+            exp.literal.decimal.precision = len(v_tuple.digits) - abs(v_tuple.exponent)
+            # Two complement yeah...
+            raise ValueError("cannnt....")
+        elif value_type is bytes:
+            exp.literal.binary = self._value
+        elif value_type is datetime.datetime:
+            # Microseconds since epoch.
+            dt = cast(datetime.datetime, self._value)
+            v = dt - datetime.datetime(1970, 1, 1, 0, 0, 0, 0)
+            exp.literal.timestamp = int(v / datetime.timedelta(microseconds=1))
+        elif value_type is datetime.time:
+            # Nanoseconds of the day.
+            tv = cast(datetime.time, self._value)
+            offset = (tv.second + tv.minute * 60 + tv.hour * 3600) * 1000 + tv.microsecond
+            exp.literal.time = int(offset * 1000)
+        elif value_type is datetime.date:
+            # Days since epoch.
+            days_since_epoch = (cast(datetime.date, self._value) - datetime.date(1970, 1, 1)).days
+            exp.literal.date = days_since_epoch
+        elif value_type is uuid.UUID:

Review Comment:
   Hm, this isn't actually supported in PySpark's current `lit`. Should we maybe exclude this for now?





[GitHub] [spark] HyukjinKwon closed pull request #38462: [SPARK-40533] [CONNECT] [PYTHON] Support most built-in literal types for Python in Spark Connect

Posted by GitBox <gi...@apache.org>.
HyukjinKwon closed pull request #38462: [SPARK-40533] [CONNECT] [PYTHON] Support most built-in literal types for Python in Spark Connect
URL: https://github.com/apache/spark/pull/38462




[GitHub] [spark] grundprinzip commented on pull request #38462: [SPARK-40533] [CONNECT] [PYTHON] Support most built-in literal types for Python in Spark Connect

Posted by GitBox <gi...@apache.org>.
grundprinzip commented on PR #38462:
URL: https://github.com/apache/spark/pull/38462#issuecomment-1298698629

   R: @HyukjinKwon @zhengruifeng @amaliujia 




[GitHub] [spark] HyukjinKwon commented on a diff in pull request #38462: [SPARK-40533] [CONNECT] [PYTHON] Support most built-in literal types for Python in Spark Connect

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on code in PR #38462:
URL: https://github.com/apache/spark/pull/38462#discussion_r1013930085


##########
python/pyspark/sql/connect/column.py:
##########
@@ -99,11 +101,59 @@ def to_plan(self, session: Optional["RemoteSparkSession"]) -> "proto.Expression"
         value_type = type(self._value)
         exp = proto.Expression()
         if value_type is int:
-            exp.literal.i32 = cast(int, self._value)
+            exp.literal.i64 = cast(int, self._value)
+        elif value_type is bool:
+            exp.literal.boolean = cast(bool, self._value)
         elif value_type is str:
             exp.literal.string = cast(str, self._value)
         elif value_type is float:
             exp.literal.fp64 = cast(float, self._value)
+        elif value_type is decimal.Decimal:
+            d_v = cast(decimal.Decimal, self._value)
+            v_tuple = d_v.as_tuple()
+            exp.literal.decimal.scale = abs(v_tuple.exponent)
+            exp.literal.decimal.precision = len(v_tuple.digits) - abs(v_tuple.exponent)
+            # Two complement yeah...
+            raise ValueError("Python Decimal not supported.")
+        elif value_type is bytes:
+            exp.literal.binary = self._value
+        elif value_type is datetime.datetime:
+            # Microseconds since epoch.
+            dt = cast(datetime.datetime, self._value)
+            v = dt - datetime.datetime(1970, 1, 1, 0, 0, 0, 0)
+            exp.literal.timestamp = int(v / datetime.timedelta(microseconds=1))
+        elif value_type is datetime.time:
+            # Nanoseconds of the day.
+            tv = cast(datetime.time, self._value)
+            offset = (tv.second + tv.minute * 60 + tv.hour * 3600) * 1000 + tv.microsecond
+            exp.literal.time = int(offset * 1000)
+        elif value_type is datetime.date:
+            # Days since epoch.
+            days_since_epoch = (cast(datetime.date, self._value) - datetime.date(1970, 1, 1)).days
+            exp.literal.date = days_since_epoch
+        elif value_type is uuid.UUID:

Review Comment:
   Maybe we could remove the `elif` so the `else` branch throws the exception?
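   For illustration, a hedged sketch of the dispatch shape this suggests: keep an `elif` per supported type and let a single trailing `else` raise, so an unsupported value such as `uuid.UUID` fails explicitly. Field names follow the diff above; the standalone function is an assumption.
   
```python
import datetime


def encode_literal(exp, value):
    # Hedged sketch of the dispatch: one branch per supported type,
    # and a single `else` that rejects everything unsupported.
    value_type = type(value)
    if value_type is int:
        exp.literal.i64 = value
    elif value_type is bool:
        exp.literal.boolean = value
    elif value_type is str:
        exp.literal.string = value
    elif value_type is float:
        exp.literal.fp64 = value
    elif value_type is bytes:
        exp.literal.binary = value
    elif value_type is datetime.date:
        # Days since the Unix epoch, as in the diff above.
        exp.literal.date = (value - datetime.date(1970, 1, 1)).days
    # (datetime/time/decimal branches omitted for brevity)
    else:
        # uuid.UUID and any other unsupported type lands here.
        raise ValueError(f"Could not convert {value!r} of type {value_type} to a literal.")
    return exp
```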





[GitHub] [spark] HyukjinKwon commented on a diff in pull request #38462: [SPARK-40533] [CONNECT] [PYTHON] Support most built-in literal types for Python in Spark Connect

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on code in PR #38462:
URL: https://github.com/apache/spark/pull/38462#discussion_r1013933320


##########
python/pyspark/sql/connect/column.py:
##########
@@ -99,11 +101,59 @@ def to_plan(self, session: Optional["RemoteSparkSession"]) -> "proto.Expression"
         value_type = type(self._value)
         exp = proto.Expression()
         if value_type is int:
-            exp.literal.i32 = cast(int, self._value)
+            exp.literal.i64 = cast(int, self._value)
+        elif value_type is bool:
+            exp.literal.boolean = cast(bool, self._value)
         elif value_type is str:
             exp.literal.string = cast(str, self._value)
         elif value_type is float:
             exp.literal.fp64 = cast(float, self._value)
+        elif value_type is decimal.Decimal:
+            d_v = cast(decimal.Decimal, self._value)
+            v_tuple = d_v.as_tuple()
+            exp.literal.decimal.scale = abs(v_tuple.exponent)
+            exp.literal.decimal.precision = len(v_tuple.digits) - abs(v_tuple.exponent)
+            # Two complement yeah...
+            raise ValueError("Python Decimal not supported.")
+        elif value_type is bytes:
+            exp.literal.binary = self._value
+        elif value_type is datetime.datetime:
+            # Microseconds since epoch.
+            dt = cast(datetime.datetime, self._value)
+            v = dt - datetime.datetime(1970, 1, 1, 0, 0, 0, 0)
+            exp.literal.timestamp = int(v / datetime.timedelta(microseconds=1))

Review Comment:
   Can we maybe match the implementation in PySpark's `types.py`: https://github.com/apache/spark/blob/master/python/pyspark/sql/types.py#L254-L260 ?
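   For reference, a hedged paraphrase of the epoch-microsecond conversion along the lines of `TimestampType.toInternal` in `python/pyspark/sql/types.py`; this is reconstructed from memory rather than quoted, so treat the exact form as an assumption.
   
```python
import calendar
import datetime
import time


def timestamp_to_micros(dt: datetime.datetime) -> int:
    # Microseconds since the Unix epoch: timezone-aware datetimes go through
    # UTC, naive datetimes are interpreted in the local timezone.
    if dt.tzinfo is not None:
        seconds = calendar.timegm(dt.utctimetuple())
    else:
        seconds = time.mktime(dt.timetuple())
    return int(seconds) * 1_000_000 + dt.microsecond


# Example with a naive datetime (local-time interpretation).
print(timestamp_to_micros(datetime.datetime(2022, 11, 1, 7, 20, 38)))
```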





[GitHub] [spark] HyukjinKwon commented on pull request #38462: [SPARK-40533] [CONNECT] [PYTHON] Support most built-in literal types for Python in Spark Connect

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on PR #38462:
URL: https://github.com/apache/spark/pull/38462#issuecomment-1303686102

   Merged to master.
   
   Let's address the complete set of types in a follow-up.




[GitHub] [spark] AmplabJenkins commented on pull request #38462: [SPARK-40533] [CONNECT] [PYTHON] Support most built-in literal types for Python in Spark Connect

Posted by GitBox <gi...@apache.org>.
AmplabJenkins commented on PR #38462:
URL: https://github.com/apache/spark/pull/38462#issuecomment-1299294257

   Can one of the admins verify this patch?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org