Posted to dev@spark.apache.org by Niranda Perera <ni...@gmail.com> on 2016/02/29 06:50:27 UTC

Control the stdout and stderr streams in an executor JVM

Hi all,

Is there any way to control the stdout and stderr streams in an
executor JVM?

I understand that there are some configurations available in the Spark
conf, as follows:
spark.executor.logs.rolling.maxRetainedFiles
spark.executor.logs.rolling.maxSize
spark.executor.logs.rolling.strategy
spark.executor.logs.rolling.time.interval
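
For example, as far as I understand these can be set at submit time;
the values, class, and jar below are just illustrative:

    spark-submit \
      --conf spark.executor.logs.rolling.strategy=size \
      --conf spark.executor.logs.rolling.maxSize=134217728 \
      --conf spark.executor.logs.rolling.maxRetainedFiles=5 \
      --class com.example.MyApp \
      /path/to/app.jar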

But is it possible to have more fine-grained control over these, as we
can with a log4j appender configured through a properties file?

Rgds
-- 
Niranda
@n1r44 <https://twitter.com/N1R44>
+94-71-554-8430
https://pythagoreanscript.wordpress.com/

Re: Control the stdout and stderr streams in an executor JVM

Posted by Jeff Zhang <zj...@gmail.com>.
You can create a log4j.properties file for the executors and pass it
with "--files log4j.properties" when submitting the Spark job.

-- 
Best Regards

Jeff Zhang