Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2018/02/28 15:45:00 UTC

[jira] [Resolved] (SPARK-23517) Make pyspark.util._exception_message produce the trace from Java side for Py4JJavaError

     [ https://issues.apache.org/jira/browse/SPARK-23517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-23517.
----------------------------------
       Resolution: Fixed
    Fix Version/s: 2.3.1

Issue resolved by pull request 20680
[https://github.com/apache/spark/pull/20680]

> Make pyspark.util._exception_message produce the trace from Java side for Py4JJavaError
> ---------------------------------------------------------------------------------------
>
>                 Key: SPARK-23517
>                 URL: https://issues.apache.org/jira/browse/SPARK-23517
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.3.0
>            Reporter: Hyukjin Kwon
>            Assignee: Hyukjin Kwon
>            Priority: Minor
>             Fix For: 2.3.1
>
>
> Currently, {{pyspark.util._exception_message}} doesn't produce the trace and message from a Py4JJavaError, as shown below:
> {code}
> >>> from pyspark.util import _exception_message
> >>> try:
> ...     sc._jvm.java.lang.String(None)
> ... except Exception as e:
> ...     pass
> ...
> >>> e.message  # _exception_message(e) gives this same empty string
> ''
> {code}
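> The issue title points at the fix: make {{_exception_message}} return the Java-side trace for {{Py4JJavaError}}. A minimal, hedged sketch of one way to do that (not necessarily the exact change in pull request 20680), glossing over Python 2 unicode handling:
> {code}
> # Hedged sketch of pyspark/util.py's _exception_message with a
> # Py4JJavaError special case; not claimed to be the exact patch.
> def _exception_message(excp):
>     """Return the message from an exception as a str/unicode object."""
>     try:
>         from py4j.protocol import Py4JJavaError
>     except ImportError:
>         Py4JJavaError = None
>     if Py4JJavaError is not None and isinstance(excp, Py4JJavaError):
>         # A Py4JJavaError's 'message' attribute is empty, but its string
>         # form includes the message and stack trace from the Java side.
>         return str(excp)
>     if hasattr(excp, "message"):
>         return excp.message
>     return str(excp)
> {code}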
> This is actually a problem in code paths where we can expect this error. For example:
> {code}
> from pyspark.sql.functions import udf
> spark.conf.set("spark.sql.execution.arrow.enabled", True)
> spark.range(1).select(udf(lambda x: [[]])()).toPandas()
> {code}
> This currently fails with only the generic Arrow note; the Java-side error is lost:
> {code}
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/.../spark/python/pyspark/sql/dataframe.py", line 2009, in toPandas
>     raise RuntimeError("%s\n%s" % (_exception_message(e), msg))
> RuntimeError:
> Note: toPandas attempted Arrow optimization because 'spark.sql.execution.arrow.enabled' is set to true. Please set it to false to disable this.
> {code}
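> The first line of that RuntimeError is blank precisely because {{_exception_message(e)}} returned {{''}} for the underlying Py4JJavaError. As a hedged illustration (run in a pyspark shell, so {{sc}} exists; the py4j attribute names are assumptions, not taken from this ticket), the lost detail lives on the error object itself:
> {code}
> # Hypothetical snippet: pulling the Java-side detail out of a Py4JJavaError
> # directly, which is what a fixed _exception_message should surface.
> from py4j.protocol import Py4JJavaError
>
> try:
>     sc._jvm.java.lang.String(None)  # fails on the Java side
> except Py4JJavaError as e:
>     print(str(e))                       # message plus the Java stack trace
>     print(e.java_exception.toString())  # the Throwable's own description
> {code}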



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org