Posted to issues@spark.apache.org by "Apache Spark (Jira)" <ji...@apache.org> on 2022/08/09 09:50:00 UTC

[jira] [Commented] (SPARK-37348) PySpark pmod function

    [ https://issues.apache.org/jira/browse/SPARK-37348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17577322#comment-17577322 ] 

Apache Spark commented on SPARK-37348:
--------------------------------------

User 'zhengruifeng' has created a pull request for this issue:
https://github.com/apache/spark/pull/37449

> PySpark pmod function
> ---------------------
>
>                 Key: SPARK-37348
>                 URL: https://issues.apache.org/jira/browse/SPARK-37348
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>    Affects Versions: 3.2.0
>            Reporter: Tim Schwab
>            Priority: Minor
>
> Because Spark is built on the JVM, % in PySpark follows Java's remainder semantics: F.lit(-1) % F.lit(2) returns -1. However, the non-negative modulus is often desired instead of the remainder.
>  
> There is a [PMOD() function in Spark SQL|https://spark.apache.org/docs/latest/api/sql/#pmod], but [not in PySpark|https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql.html#functions]. So at the moment, the two options for getting the modulus are to use F.expr("pmod(A, B)") or to create a helper function such as:
>  
> {code:python}
> def pmod(dividend, divisor):
>     # (r + divisor) % divisor also maps a zero remainder back to 0,
>     # which a plain "add divisor when dividend < 0" check would not
>     return ((dividend % divisor) + divisor) % divisor
> {code}
>  
>  
> Neither is optimal: pmod should be native to PySpark, as it is in Spark SQL.
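For context, the remainder-versus-modulus distinction the issue describes can be illustrated in plain Python with no Spark session; `jvm_rem` and `pmod` below are hypothetical helpers mimicking, respectively, the JVM's truncated remainder and Spark SQL's pmod():

```python
import math

def jvm_rem(dividend, divisor):
    # JVM-style remainder: truncates toward zero, so the result's
    # sign follows the dividend (math.fmod has the same behavior)
    return int(math.fmod(dividend, divisor))

def pmod(dividend, divisor):
    # Positive modulus: shift a negative remainder up by the divisor
    r = jvm_rem(dividend, divisor)
    return r + divisor if r < 0 else r

print(jvm_rem(-1, 2))  # -1, what -1 % 2 yields on the JVM
print(pmod(-1, 2))     #  1, what pmod(-1, 2) yields in Spark SQL
```

Note that Python's own % already returns the non-negative modulus for a positive divisor, which is why the JVM behavior surprises PySpark users.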



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org