Posted to users@kafka.apache.org by Raghav <ra...@gmail.com> on 2017/09/07 02:25:01 UTC

Reduce Kafka Client logging

Hi

My Java code prints the Kafka producer config every time it does a send, which
makes the log very verbose.

How can I reduce the Kafka client (producer) logging in my Java code?

Thanks for your help.

-- 
Raghav

Re: Reduce Kafka Client logging

Posted by Raghav <ra...@gmail.com>.
Thanks, Kamal.

-- 
Raghav

Re: Reduce Kafka Client logging

Posted by Kamal Chandraprakash <ka...@gmail.com>.
Add this line at the end of your log4j.properties:

log4j.logger.org.apache.kafka.clients.producer=WARN
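
This sets the whole org.apache.kafka.clients.producer package, including the
ProducerConfig dump, to WARN while your root logger stays at INFO. As a sketch,
with the log4j.properties you posted the file would end like this (only the
last line is new):

log4j.rootLogger=INFO, STDOUT

# ... existing STDOUT and file appender definitions unchanged ...

log4j.logger.org.apache.kafka.clients.producer=WARN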


Re: Reduce Kafka Client logging

Posted by Raghav <ra...@gmail.com>.
Hi Viktor

Can you please share the log4j config snippet that I should use? My Java
code's current log4j.properties looks like this. How should I add the new
entry that you mentioned? Thanks.


log4j.rootLogger=INFO, STDOUT

log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender
log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout
log4j.appender.STDOUT.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}:%L %m%n

log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=logfile.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{dd-MM-yyyy HH:mm:ss} %-5p %c{1}:%L - %m%n

-- 
Raghav

Re: Reduce Kafka Client logging

Posted by Viktor Somogyi <vi...@gmail.com>.
Hi Raghav,

I think it is enough to raise the logging level
of org.apache.kafka.clients.producer.ProducerConfig to WARN in log4j.
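
In log4j.properties that is a single extra line:

log4j.logger.org.apache.kafka.clients.producer.ProducerConfig=WARN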
Also, if possible, don't recreate the Kafka producer each time. The protocol is
designed for long-lived connections, and recreating the connection on every
send puts pressure both on the TCP layer (connection setup is expensive) and on
Kafka itself, which may even result in broker failures (typically by exceeding
the maximum allowed number of file descriptors).
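
If you really have to reach a different broker on each send, here is a minimal
sketch of what I mean (the broker address, topic name and getOrCreateProducer
helper are made up for illustration; String keys and values as in your config).
It caches one long-lived producer per bootstrap server instead of building a
new one per send:

import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerCache {

    // One long-lived producer per bootstrap server, created lazily and reused.
    private static final Map<String, Producer<String, String>> PRODUCERS =
            new ConcurrentHashMap<>();

    static Producer<String, String> getOrCreateProducer(String bootstrapServers) {
        return PRODUCERS.computeIfAbsent(bootstrapServers, servers -> {
            Properties props = new Properties();
            props.put("bootstrap.servers", servers);
            props.put("acks", "all");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            // The ProducerConfig values are logged once here, on first use,
            // instead of on every send.
            return new KafkaProducer<>(props);
        });
    }

    public static void main(String[] args) {
        // Hypothetical broker address and topic, for illustration only.
        Producer<String, String> producer = getOrCreateProducer("broker1:9092");
        producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        producer.flush();
    }
}

The config dump then appears once per broker instead of once per send, and the
TCP connections get reused rather than re-established every time.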

HTH,
Viktor

Re: Reduce Kafka Client logging

Posted by Raghav <ra...@gmail.com>.
Due to the nature of the code, I have to open a connection to a different Kafka
broker each time and send one message. We have several Kafka brokers, so my
client log is full of the following output. What log settings should I use in
log4j just for the Kafka producer logs?


17/09/07 04:44:04 INFO producer.ProducerConfig:180 ProducerConfig values:
        acks = all
        batch.size = 16384
        block.on.buffer.full = false
        bootstrap.servers = [10.10.10.5:]
        buffer.memory = 33554432
        client.id =
        compression.type = none
        connections.max.idle.ms = 540000
        interceptor.classes = null
        key.serializer = class org.apache.kafka.common.serialization.StringSerializer
        linger.ms = 1
        max.block.ms = 5000
        max.in.flight.requests.per.connection = 5
        max.request.size = 1048576
        metadata.fetch.timeout.ms = 60000
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.sample.window.ms = 30000
        partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
        receive.buffer.bytes = 32768
        reconnect.backoff.ms = 50
        request.timeout.ms = 5000
        retries = 0
        retry.backoff.ms = 100
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.mechanism = GSSAPI
        security.protocol = PLAINTEXT
        send.buffer.bytes = 131072
        ssl.cipher.suites = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = null
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        timeout.ms = 30000
        value.serializer = class org.apache.kafka.common.serialization.StringSerializer

-- 
Raghav

Re: Reduce Kafka Client logging

Posted by Jaikiran Pai <ja...@gmail.com>.
Can you post the exact log messages that you are seeing?

-Jaikiran

