Posted to issues@spark.apache.org by "Patrick Wendell (JIRA)" <ji...@apache.org> on 2014/06/11 05:23:02 UTC

[jira] [Resolved] (SPARK-1940) Enable rolling of executor logs (stdout / stderr)

     [ https://issues.apache.org/jira/browse/SPARK-1940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Wendell resolved SPARK-1940.
------------------------------------

       Resolution: Fixed
    Fix Version/s: 1.1.0

Issue resolved by pull request 895
[https://github.com/apache/spark/pull/895]

> Enable rolling of executor logs (stdout / stderr)
> -------------------------------------------------
>
>                 Key: SPARK-1940
>                 URL: https://issues.apache.org/jira/browse/SPARK-1940
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Tathagata Das
>            Assignee: Tathagata Das
>             Fix For: 1.1.0
>
>
> Currently, in the default log4j configuration, all the executor logs get sent to the file [executor-working-dir]/stderr. This does not allow log files to be rolled, so old logs cannot be removed.
> Using log4j's RollingFileAppender allows the log4j logs to be rolled, but the rolled logs go to a different set of files than stdout and stderr, so they are no longer visible in the Spark web UI, which reads only the files stdout and stderr (an illustrative configuration is sketched after the quoted description). Furthermore, this still does not allow stdout and stderr themselves to be cleared periodically when a large amount of output is written to them (e.g., by explicit println calls inside a map function).
> Solving this requires rolling the logs in such a way that the Spark web UI is aware of it and can retrieve the logs across the rolled-over files (the resulting configuration options are also sketched below).
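
For illustration only (not part of the original issue text), a minimal log4j 1.x RollingFileAppender setup of the kind described above could look like the following; the appender name "rolling" and the file name "spark-executor.log" are hypothetical choices. Note that the output goes to spark-executor.log and its rolled-over copies, not to the stdout/stderr files the web UI reads, which is exactly the problem the issue describes:

    # Route executor logging through a size-based rolling appender.
    log4j.rootCategory=INFO, rolling
    log4j.appender.rolling=org.apache.log4j.RollingFileAppender
    # Hypothetical file name; NOT the stdout/stderr files shown in the Spark web UI.
    log4j.appender.rolling.File=spark-executor.log
    # Roll once the file reaches 50 MB, keeping at most 5 rolled-over files.
    log4j.appender.rolling.MaxFileSize=50MB
    log4j.appender.rolling.MaxBackupIndex=5
    log4j.appender.rolling.layout=org.apache.log4j.PatternLayout
    log4j.appender.rolling.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n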
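The fix merged here implements the rolling on the Spark side so the web UI stays aware of the rolled files. As a sketch of how the feature is used, based on the spark.executor.logs.rolling.* options documented for Spark 1.1 and later (worth double-checking against the configuration page of the release you run; they apply to executor logs on standalone workers), lines like these in spark-defaults.conf enable time-based rolling:

    # Roll executor stdout/stderr once per hour, keeping the 24 newest files.
    spark.executor.logs.rolling.strategy          time
    spark.executor.logs.rolling.time.interval     hourly
    spark.executor.logs.rolling.maxRetainedFiles  24

    # Alternatively, roll by size (maxSize is in bytes):
    # spark.executor.logs.rolling.strategy   size
    # spark.executor.logs.rolling.maxSize    134217728

With either strategy, the executor log pages of the web UI can still serve the logs across the rolled-over files, which is the behavior this issue asked for.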



--
This message was sent by Atlassian JIRA
(v6.2#6252)