Posted to reviews@spark.apache.org by "ueshin (via GitHub)" <gi...@apache.org> on 2023/04/29 01:45:48 UTC

[GitHub] [spark] ueshin opened a new pull request, #40998: [SPARK-43323][SQL][PYTHON] Fix DataFrame.toPandas with Arrow enabled to handle exceptions properly

ueshin opened a new pull request, #40998:
URL: https://github.com/apache/spark/pull/40998

   ### What changes were proposed in this pull request?
   
   Fixes `DataFrame.toPandas` with Arrow enabled to handle exceptions properly, so that the original exception raised in Spark is surfaced to the Python caller:
   
   ```py
   >>> spark.conf.set("spark.sql.ansi.enabled", True)
   >>> spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", True)
   >>> spark.sql("select 1/0").toPandas()
   ...
   Traceback (most recent call last):
   ...
   pyspark.errors.exceptions.captured.ArithmeticException: [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
   == SQL(line 1, position 8) ==
   select 1/0
          ^^^
   
   ```
   
   ### Why are the changes needed?
   
   Currently `DataFrame.toPandas` doesn't properly capture exceptions raised in Spark.
   
   ```py
   >>> spark.conf.set("spark.sql.ansi.enabled", True)
   >>> spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", True)
   >>> spark.sql("select 1/0").toPandas()
   ...
     An error occurred while calling o53.getResult.
   : org.apache.spark.SparkException: Exception thrown in awaitResult:
   	at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:322)
   ...
   ```
   
   This happens because `jsocket_auth_server.getResult()` always wraps the thrown exception in a `SparkException`, which is not unwrapped into the original error on the Python side.
   
   Whereas without Arrow, the original exception is raised properly:
   
   ```py
   >>> spark.conf.set("spark.sql.ansi.enabled", True)
   >>> spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", False)
   >>> spark.sql("select 1/0").toPandas()
   Traceback (most recent call last):
   ...
   pyspark.errors.exceptions.captured.ArithmeticException: [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
   == SQL(line 1, position 8) ==
   select 1/0
          ^^^
   ```
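
   For reference, a minimal sketch of the approach taken here, inferred from the review diff later in this thread (the `get_result_checked` helper is hypothetical; the actual change lives inside `_collect_as_arrow` in `python/pyspark/sql/pandas/conversion.py`):

   ```py
   from pyspark.errors.exceptions.captured import unwrap_spark_exception

   def get_result_checked(jsocket_auth_server):  # hypothetical helper for illustration
       # unwrap_spark_exception is a context manager that unwraps the
       # SparkException raised from the JVM and re-raises the original
       # captured error (e.g. ArithmeticException) to the Python caller.
       with unwrap_spark_exception():
           # Join the serving thread and raise any exceptions from
           # collectAsArrowToPython.
           return jsocket_auth_server.getResult()
   ```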
   
   ### Does this PR introduce _any_ user-facing change?
   
   Yes. `DataFrame.toPandas` with Arrow enabled will now raise the original exception instead of a wrapped `SparkException`.
   
   ### How was this patch tested?
   
   Added a related test.
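
   Not quoted from the PR itself, but a minimal sketch of what such a test could assert (the test name and structure are illustrative, not the actual test added):

   ```py
   import unittest

   from pyspark.errors import ArithmeticException
   from pyspark.sql import SparkSession


   class ToPandasArrowExceptionTest(unittest.TestCase):
       # Illustrative check: with ANSI mode and Arrow both enabled, a failing
       # query should raise the original ArithmeticException rather than a
       # SparkException wrapped by jsocket_auth_server.getResult().
       def test_to_pandas_with_arrow_raises_original_exception(self):
           spark = SparkSession.builder.master("local[1]").getOrCreate()
           try:
               spark.conf.set("spark.sql.ansi.enabled", True)
               spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", True)
               with self.assertRaises(ArithmeticException):
                   spark.sql("select 1/0").toPandas()
           finally:
               spark.stop()


   if __name__ == "__main__":
       unittest.main()
   ```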



[GitHub] [spark] HyukjinKwon commented on pull request #40998: [SPARK-43323][SQL][PYTHON] Fix DataFrame.toPandas with Arrow enabled to handle exceptions properly

Posted by "HyukjinKwon (via GitHub)" <gi...@apache.org>.
HyukjinKwon commented on PR #40998:
URL: https://github.com/apache/spark/pull/40998#issuecomment-1530538764

   Merged to master.



[GitHub] [spark] HyukjinKwon commented on a diff in pull request #40998: [SPARK-43323][SQL][PYTHON] Fix DataFrame.toPandas with Arrow enabled to handle exceptions properly

Posted by "HyukjinKwon (via GitHub)" <gi...@apache.org>.
HyukjinKwon commented on code in PR #40998:
URL: https://github.com/apache/spark/pull/40998#discussion_r1180942169


##########
python/pyspark/sql/pandas/conversion.py:
##########
@@ -357,8 +357,11 @@ def _collect_as_arrow(self, split_batches: bool = False) -> List["pa.RecordBatch
             else:
                 results = list(batch_stream)
         finally:
-            # Join serving thread and raise any exceptions from collectAsArrowToPython
-            jsocket_auth_server.getResult()
+            from pyspark.errors.exceptions.captured import unwrap_spark_exception

Review Comment:
   I think we can just import it at the top.
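
   That is, something like the following at the module level of `conversion.py` instead of a local import inside the `finally` block (placement shown is illustrative):

   ```py
   # Near the other imports at the top of python/pyspark/sql/pandas/conversion.py,
   # so the finally block can use unwrap_spark_exception directly:
   from pyspark.errors.exceptions.captured import unwrap_spark_exception
   ```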




[GitHub] [spark] HyukjinKwon closed pull request #40998: [SPARK-43323][SQL][PYTHON] Fix DataFrame.toPandas with Arrow enabled to handle exceptions properly

Posted by "HyukjinKwon (via GitHub)" <gi...@apache.org>.
HyukjinKwon closed pull request #40998: [SPARK-43323][SQL][PYTHON] Fix DataFrame.toPandas with Arrow enabled to handle exceptions properly
URL: https://github.com/apache/spark/pull/40998

