Posted to reviews@spark.apache.org by "zhengruifeng (via GitHub)" <gi...@apache.org> on 2023/08/21 10:48:25 UTC

[GitHub] [spark] zhengruifeng commented on a diff in pull request #42584: [SPARK-44891][PYTHON][CONNECT] Enable Doctests of `rand`, `randn` and `log`

zhengruifeng commented on code in PR #42584:
URL: https://github.com/apache/spark/pull/42584#discussion_r1299944852


##########
python/pyspark/sql/functions.py:
##########
@@ -5146,26 +5146,27 @@ def log(arg1: Union["ColumnOrName", float], arg2: Optional["ColumnOrName"] = Non
 
     Examples
     --------
-    >>> df = spark.createDataFrame([10, 100, 1000], "INT")
-    >>> df.select(log(10.0, df.value).alias('ten')).show() # doctest: +SKIP
-    +---+
-    |ten|
-    +---+

Review Comment:
   the previous values are:
   ```
   +------------------+
   |               1.0|
   |               2.0|
   |2.9999999999999996|
   +------------------+
   ```
   
   I think the previous doctest was skipped because of this floating-point rounding error.
   
   With the new input values (2, 4, 8) and the new logarithm base 2.0, the output happens to be exact, hence this change.
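   
   For illustration, a minimal PySpark sketch of the behavior described above (it assumes a running `SparkSession` and is not the PR's exact doctest text; column/DataFrame names are mine):
   ```python
   from pyspark.sql import SparkSession
   from pyspark.sql.functions import log
   
   spark = SparkSession.builder.getOrCreate()
   
   # Old doctest inputs: base-10 log of 10, 100, 1000.
   # In double precision the last value rounds to 2.9999999999999996,
   # so the expected output was unstable and the doctest had to be skipped.
   old_df = spark.createDataFrame([10, 100, 1000], "INT")
   old_df.select(log(10.0, old_df.value).alias("ten")).show()
   
   # New inputs: base-2 log of 2, 4, 8 happens to come out exactly as
   # 1.0, 2.0, 3.0, so the expected output is stable and the doctest
   # can be enabled.
   new_df = spark.createDataFrame([2, 4, 8], "INT")
   new_df.select(log(2.0, new_df.value).alias("two")).show()
   ```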



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org