Posted to user@flume.apache.org by Sutanu Das <sd...@att.com> on 2016/02/24 00:54:17 UTC

Spark suppress INFO messages per Streaming Job

Community,

How can I suppress INFO messages from a Spark Streaming job on a per-job basis? Meaning, I don't want to change the log4j properties for the entire Spark cluster; I only want to suppress the INFO messages for a specific Streaming job, perhaps in the job properties file. Is that possible?

Or do I need to use the sc._jvm.Logging function inside our Scala code to suppress INFO messages from RDDs?

Please help us; otherwise the redirected output log of the Streaming job grows so big with those INFO messages that our file system is filling up. Thanks again.
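For reference, the programmatic route asked about above would look roughly like this in the driver. This is a minimal sketch only, assuming Spark 1.4+ (for SparkContext.setLogLevel) and the log4j 1.x API that Spark bundles; the class name, app name, and batch interval are made up:

    import org.apache.log4j.{Level, Logger}
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object QuietStreamingJob {
      def main(args: Array[String]): Unit = {
        // Raise the threshold on the noisiest loggers before any Spark machinery starts.
        Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
        Logger.getLogger("org.spark-project").setLevel(Level.WARN)

        val conf = new SparkConf().setAppName("quiet-streaming-job")
        val ssc = new StreamingContext(conf, Seconds(10))

        // Since Spark 1.4 the level can also be set through the SparkContext;
        // this takes effect in the driver JVM.
        ssc.sparkContext.setLogLevel("WARN")

        // ... define DStreams and output operations here ...

        ssc.start()
        ssc.awaitTermination()
      }
    }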

Re: Spark suppress INFO messages per Streaming Job

Posted by Gonzalo Herreros <gh...@gmail.com>.
The way I have done that is by keeping a copy of the Spark config folder with
the updated log4j settings and running the job so that it points to that
alternative configuration folder.
The drawback is that if you later change other Spark settings for the cluster,
that job won't pick them up.
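A rough sketch of that setup, assuming the stock launch scripts (which honor the SPARK_CONF_DIR environment variable) and hypothetical paths and class names:

    # Copy the cluster's config folder and quiet the root logger only in the copy.
    cp -r "$SPARK_HOME/conf" /etc/spark/quiet-conf
    # In /etc/spark/quiet-conf/log4j.properties change
    #   log4j.rootCategory=INFO, console
    # to
    #   log4j.rootCategory=WARN, console

    # Launch just this streaming job against the alternative folder.
    SPARK_CONF_DIR=/etc/spark/quiet-conf spark-submit \
      --class com.example.QuietStreamingJob \
      my-streaming-job.jar

The copy keeps this one job quiet without touching the cluster-wide conf, which is also why it stops tracking later cluster-wide changes.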

I guess other options are symlinking the unchanged config files into that
alternative config folder, or maybe putting a log4j configuration at the front
of the driver/executor classpath with the extraClassPath options.
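For the classpath variant, something like the following should pick up a quieter log4j.properties before Spark's own. Again a sketch only: /etc/spark/quiet-log4j is a hypothetical directory, present on every node, containing just the modified log4j.properties; the settings behind the flags are spark.driver.extraClassPath and spark.executor.extraClassPath, which are prepended to the classpath:

    spark-submit \
      --driver-class-path /etc/spark/quiet-log4j \
      --conf spark.executor.extraClassPath=/etc/spark/quiet-log4j \
      --class com.example.QuietStreamingJob \
      my-streaming-job.jar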

Maybe people on the Spark user list know of better ways.

Gonzalo
