Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2019/08/27 06:49:00 UTC

[jira] [Commented] (SPARK-27992) PySpark socket server should sync with JVM connection thread future

    [ https://issues.apache.org/jira/browse/SPARK-27992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916446#comment-16916446 ] 

Hyukjin Kwon commented on SPARK-27992:
--------------------------------------

I am increasing the priority to blocker per SPARK-28881. cc [~dongjoon]

> PySpark socket server should sync with JVM connection thread future
> -------------------------------------------------------------------
>
>                 Key: SPARK-27992
>                 URL: https://issues.apache.org/jira/browse/SPARK-27992
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>    Affects Versions: 3.0.0
>            Reporter: Bryan Cutler
>            Assignee: Bryan Cutler
>            Priority: Blocker
>             Fix For: 3.0.0
>
>
> Both SPARK-27805 and SPARK-27548 identified an issue where errors from a Spark job are not propagated to Python. This happens because toLocalIterator() and toPandas() with Arrow enabled run the Spark job asynchronously in a background thread after creating the socket connection info. The fix in those issues was to catch a SparkException if the job failed and then send the exception through the PySpark serializer.
> A better fix would be to allow Python to wait on the serving thread's future and join the thread. That way, if the serving thread throws an exception, it is propagated to Python by the call to awaitResult.
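The following sketch illustrates the pattern the quoted description proposes, using only plain Python sockets and concurrent.futures rather than Spark itself; serve_data, read_all, and the injected failure are hypothetical stand-ins, not PySpark or JVM APIs. The point is that waiting on the serving thread's future surfaces the server-side exception in the consumer instead of leaving it as a silently truncated stream.

# A minimal sketch (not actual PySpark/JVM code) of the pattern described
# above: data is served from a background thread over a socket, and the
# consumer waits on that thread's future after reading, so an exception
# raised while serving is re-raised in the consumer instead of being lost
# when the socket simply closes.
import socket
from concurrent.futures import ThreadPoolExecutor

def serve_data(server_sock, rows):
    # Serving side: accept one connection and stream rows to it.
    conn, _ = server_sock.accept()
    with conn:
        for row in rows:
            if row is None:  # stand-in for a task failure during the job
                raise RuntimeError("job failed while serving results")
            conn.sendall(f"{row}\n".encode())

def read_all(port):
    # Consuming side: read everything the server sends until EOF.
    with socket.create_connection(("127.0.0.1", port)) as sock:
        return sock.makefile().read()

server_sock = socket.socket()
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(serve_data, server_sock, [1, 2, None])
    data = read_all(port)  # on failure the stream is just truncated...
    future.result()        # ...but waiting on the future re-raises the error

If the future.result() call were omitted, the consumer would only see a shorter-than-expected stream, which mirrors the behavior the earlier fixes had to work around.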


