Posted to issues@spark.apache.org by "holdenk (JIRA)" <ji...@apache.org> on 2018/08/24 21:23:00 UTC

[jira] [Resolved] (SPARK-19094) Plumb through logging/error messages from the JVM to Jupyter PySpark

     [ https://issues.apache.org/jira/browse/SPARK-19094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

holdenk resolved SPARK-19094.
-----------------------------
    Resolution: Won't Fix

No longer as important given other changes.

> Plumb through logging/error messages from the JVM to Jupyter PySpark
> --------------------------------------------------------------------
>
>                 Key: SPARK-19094
>                 URL: https://issues.apache.org/jira/browse/SPARK-19094
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>            Reporter: holdenk
>            Priority: Trivial
>
> Jupyter/IPython notebooks work by overriding sys.stdout & sys.stderr; as a result, the error messages that show up in Jupyter/IPython are often missing the related logs - which are often more useful than the exception itself.
> This could make it easier for Python developers getting started with Spark on their local laptops to debug their applications, since otherwise they need to remember to keep going back to the terminal from which they launched the notebook.
> One counterpoint to this is that Spark's logging is fairly verbose, but since we provide the ability for the user to tune the log level from within the notebook, that should be OK.
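The mechanism described above can be illustrated without Spark itself. This is a minimal sketch, not anything from the issue: a `subprocess` stands in for Spark's JVM, and `io.StringIO` stands in for the stream object Jupyter installs over `sys.stdout`. The point is that output written by a child process goes to the OS-level file descriptor, which a Python-level override never sees.

```python
# Sketch: why JVM-side logs bypass a Jupyter notebook's output capture.
# Jupyter effectively replaces sys.stdout with its own stream object;
# here we use a StringIO as a stand-in for that override.
import io
import subprocess
import sys

captured = io.StringIO()
original_stdout = sys.stdout
sys.stdout = captured            # what IPython/Jupyter effectively does

print("python-level message")    # routed through the override -> captured

# A child process (standing in for Spark's JVM) inherits the real OS-level
# stdout, not the Python override. In a notebook its output lands in the
# terminal; here we capture it separately just to inspect it, and show it
# never reaches the Python-level override.
child = subprocess.run(
    [sys.executable, "-c", "print('jvm-style message')"],
    capture_output=True,
    text=True,
)

sys.stdout = original_stdout     # restore the real stdout
```

As for tuning verbosity from within the notebook, that is done through the real PySpark API `sc.setLogLevel("WARN")` (or any other log4j level) on a live `SparkContext`.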



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org