Posted to issues@spark.apache.org by "Tim Schwab (Jira)" <ji...@apache.org> on 2021/11/16 16:21:00 UTC

[jira] [Updated] (SPARK-37348) PySpark pmod function

     [ https://issues.apache.org/jira/browse/SPARK-37348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tim Schwab updated SPARK-37348:
-------------------------------
    Description: 
Because Spark is built on the JVM, the % operator in PySpark follows Java remainder semantics: F.lit(-1) % F.lit(2) returns -1, with the sign following the dividend. However, the non-negative modulus is often desired instead of the remainder.

 

There is a [PMOD() function in Spark SQL|https://spark.apache.org/docs/latest/api/sql/#pmod], but [not in PySpark|https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql.html#functions]. So at the moment, the two options for getting the modulus are to use F.expr("pmod(A, B)"), or to create a helper function such as:
 
{code:python}
from pyspark.sql import functions as F

def pmod(dividend, divisor):
    # Shift only when the remainder itself is negative; testing the
    # dividend instead would wrongly add the divisor when the
    # remainder is already 0 (e.g. pmod(-4, 2) must be 0, not 2).
    remainder = dividend % divisor
    return F.when(remainder < 0, remainder + divisor).otherwise(remainder){code}
 
 
Neither is optimal: pmod should be native to PySpark, as it is in Spark SQL.
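The difference between the JVM-style remainder and the desired modulus can be sketched in plain Python, with no Spark dependency. The function names below are illustrative only (not part of any Spark API), and the sketch assumes a positive divisor and small integers:

```python
def jvm_rem(a, b):
    # JVM-style remainder: truncated division, sign follows the dividend.
    # int(a / b) truncates toward zero, which is fine for small integers.
    return a - b * int(a / b)

def pmod(a, b):
    # Positive modulus: shift only when the remainder is actually negative.
    r = jvm_rem(a, b)
    return r + b if r < 0 else r

print(jvm_rem(-1, 2))  # -1, what PySpark's % operator returns
print(pmod(-1, 2))     # 1, what Spark SQL's pmod(-1, 2) returns
print(pmod(-4, 2))     # 0, not 2: the remainder-zero case needs no shift
```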

  was:
Because Spark is built on the JVM, the % operator in PySpark follows Java remainder semantics: F.lit(-1) % F.lit(2) returns -1, with the sign following the dividend. However, the non-negative modulus is often desired instead of the remainder.

 

There is a PMOD() function in Spark SQL, but not in PySpark. So at the moment, the two options for getting the modulus are to use F.expr("pmod(A, B)"), or to create a helper function such as:
 
{code:python}
from pyspark.sql import functions as F

def pmod(dividend, divisor):
    # Shift only when the remainder itself is negative; testing the
    # dividend instead would wrongly add the divisor when the
    # remainder is already 0 (e.g. pmod(-4, 2) must be 0, not 2).
    remainder = dividend % divisor
    return F.when(remainder < 0, remainder + divisor).otherwise(remainder){code}
 
 
Neither is optimal: pmod should be native to PySpark, as it is in Spark SQL.


> PySpark pmod function
> ---------------------
>
>                 Key: SPARK-37348
>                 URL: https://issues.apache.org/jira/browse/SPARK-37348
>             Project: Spark
>          Issue Type: New Feature
>          Components: PySpark
>    Affects Versions: 3.2.0
>            Reporter: Tim Schwab
>            Priority: Minor
>              Labels: newbie
>
> Because Spark is built on the JVM, the % operator in PySpark follows Java remainder semantics: F.lit(-1) % F.lit(2) returns -1, with the sign following the dividend. However, the non-negative modulus is often desired instead of the remainder.
>  
> There is a [PMOD() function in Spark SQL|https://spark.apache.org/docs/latest/api/sql/#pmod], but [not in PySpark|https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql.html#functions]. So at the moment, the two options for getting the modulus are to use F.expr("pmod(A, B)"), or to create a helper function such as:
>  
> {code:python}
> from pyspark.sql import functions as F
>
> def pmod(dividend, divisor):
>     # Shift only when the remainder itself is negative; testing the
>     # dividend instead would wrongly add the divisor when the
>     # remainder is already 0 (e.g. pmod(-4, 2) must be 0, not 2).
>     remainder = dividend % divisor
>     return F.when(remainder < 0, remainder + divisor).otherwise(remainder){code}
>  
>  
> Neither is optimal: pmod should be native to PySpark, as it is in Spark SQL.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org