Posted to dev@kafka.apache.org by David Arthur <mu...@gmail.com> on 2012/07/31 22:09:16 UTC

Log4jAppender backoff if server is down

Greetings all, 

I'm using the KafkaLog4jAppender with Solr and recently ran into an interesting issue. The disk filled up on my Kafka broker (just a single broker; this is a dev environment) and Solr slowed to a near halt. My best guess is that each log4j message was incurring significant overhead handling the exceptions coming back from the Kafka broker.

So I'm wondering: would it make sense to implement a backoff strategy for this client if it starts getting exceptions from the server? Alternatively, could the Kafka broker mark itself as "down" in ZooKeeper when it gets into certain situations (like a full disk)? I guess this could really apply to any client, not just the log4j appender.
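
To make that concrete, here is a rough sketch of the kind of client-side backoff I have in mind (the class and its names are made up for illustration; this is not code from the existing appender):

    // Hypothetical sketch: a gate the appender could consult before each send.
    // After a failure it waits before trying again, doubling the wait on every
    // consecutive failure up to a cap.
    public class BackoffGate {
        private final long initialBackoffMs;
        private final long maxBackoffMs;
        private long currentBackoffMs;
        private long retryAtMs = 0L;

        public BackoffGate(long initialBackoffMs, long maxBackoffMs) {
            this.initialBackoffMs = initialBackoffMs;
            this.maxBackoffMs = maxBackoffMs;
            this.currentBackoffMs = initialBackoffMs;
        }

        // True if enough time has passed since the last failure to try again.
        public synchronized boolean shouldAttempt() {
            return System.currentTimeMillis() >= retryAtMs;
        }

        // Call after a successful send: clears any accumulated backoff.
        public synchronized void onSuccess() {
            currentBackoffMs = initialBackoffMs;
            retryAtMs = 0L;
        }

        // Call after a failed send: pushes the next attempt further out.
        public synchronized void onFailure() {
            retryAtMs = System.currentTimeMillis() + currentBackoffMs;
            currentBackoffMs = Math.min(currentBackoffMs * 2, maxBackoffMs);
        }
    }

The appender's append() could consult shouldAttempt() and drop (or locally buffer) the message while the gate is closed, so a dead broker costs a cheap time check per log line instead of a connection attempt and an exception.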

Thanks!
-David

Re: Log4jAppender backoff if server is down

Posted by Neha Narkhede <ne...@gmail.com>.
David,

From the logs, it seems the box you ran the Kafka server on ran out
of disk space. That's why the Kafka server starts up but shuts itself
down immediately. Could you please try starting Kafka on a box with
enough disk space?

Thanks,
Neha

On Tue, Jul 31, 2012 at 5:08 PM, David Arthur <mu...@gmail.com> wrote:
> Here are some log snippets:
>
> Kafka server logs: https://gist.github.com/c440ada8daa629e337e2
> Solr logs: https://gist.github.com/42624c901fc7967fd137
>
> In this case, I am sending all the "org.apache.solr" logs to Kafka, so each document update in Solr produces a log message. Each update to Solr produced an exception like this, which caused things to slow way down.
>
> My earlier statement about Kafka being up but unable to write logs was incorrect. During this, it seems Kafka was simply down (our supervisor that restarts Kafka gave up after a few tries). So when Kafka is down, what should the client's behavior be?
>
> Ideally, to me, the client could know which brokers are available through ZK watches and simply refuse to attempt a send/produce if none are available.
>
> What do you guys think? I'm not saying this is necessarily a Kafka issue; I'm just not sure what the best thing to do here is.
>
> Cheers
> -David
>
> On Jul 31, 2012, at 5:48 PM, Neha Narkhede wrote:
>
>> David,
>>
>> Would you mind sending around the error stack traces? That will help
>> determine the right fix.
>>
>> Thanks,
>> Neha
>>
>> On Tue, Jul 31, 2012 at 1:09 PM, David Arthur <mu...@gmail.com> wrote:
>>> Greetings all,
>>>
>>> I'm using the KafkaLog4jAppender with Solr and recently ran into an interesting issue. The disk filled up on my Kafka broker (just a single broker; this is a dev environment) and Solr slowed to a near halt. My best guess is that each log4j message was incurring significant overhead handling the exceptions coming back from the Kafka broker.
>>>
>>> So I'm wondering: would it make sense to implement a backoff strategy for this client if it starts getting exceptions from the server? Alternatively, could the Kafka broker mark itself as "down" in ZooKeeper when it gets into certain situations (like a full disk)? I guess this could really apply to any client, not just the log4j appender.
>>>
>>> Thanks!
>>> -David
>

Re: Log4jAppender backoff if server is down

Posted by David Arthur <mu...@gmail.com>.
Here are some log snippets:

Kafka server logs: https://gist.github.com/c440ada8daa629e337e2
Solr logs: https://gist.github.com/42624c901fc7967fd137

In this case, I am sending all the "org.apache.solr" logs to Kafka, so each document update in Solr produces a log message. Each update to Solr produced an exception like this, which caused things to slow way down.

My earlier statement about Kafka being up but unable to write logs was incorrect. During this, it seems Kafka was simply down (our supervisor that restarts Kafka gave up after a few tries). So when Kafka is down, what should the client's behavior be?

Ideally, to me, the client could know which brokers are available through ZK watches and simply refuse to attempt a send/produce if none are available.
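
As a rough illustration (just a sketch I hacked up, not existing client code; it assumes brokers keep registering themselves as ephemeral nodes under /brokers/ids and uses the plain ZooKeeper client):

    import java.io.IOException;
    import java.util.List;

    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    // Hypothetical sketch: keeps a cached flag of whether any broker is
    // registered under /brokers/ids, refreshed by a ZooKeeper child watch,
    // so a producer can cheaply skip sends when no broker is up.
    public class BrokerAvailability implements Watcher {
        private static final String BROKER_IDS_PATH = "/brokers/ids";
        private final ZooKeeper zk;
        private volatile boolean brokersAvailable = false;

        public BrokerAvailability(String zkConnect) throws IOException {
            this.zk = new ZooKeeper(zkConnect, 30000, this);
            refresh();
        }

        // Cheap check the appender/producer could make before each send.
        public boolean brokersAvailable() {
            return brokersAvailable;
        }

        @Override
        public void process(WatchedEvent event) {
            // Fired on connection state changes and on changes under the
            // watched path; either way, re-read the broker list.
            if (zk != null) {
                refresh();
            }
        }

        private void refresh() {
            try {
                // Passing 'true' re-registers this object as the watcher.
                List<String> ids = zk.getChildren(BROKER_IDS_PATH, true);
                brokersAvailable = !ids.isEmpty();
            } catch (KeeperException e) {
                // Path missing or ZooKeeper unreachable: assume no brokers.
                brokersAvailable = false;
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                brokersAvailable = false;
            }
        }
    }

A producer wrapper could check brokersAvailable() before each send and skip or buffer the message when no broker is registered. The session timeout above is arbitrary.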

What do you guys think? I'm not saying this is necessarily a Kafka issue; I'm just not sure what the best thing to do here is.

Cheers
-David

On Jul 31, 2012, at 5:48 PM, Neha Narkhede wrote:

> David,
> 
> Would you mind sending around the error stack traces? That will help
> determine the right fix.
> 
> Thanks,
> Neha
> 
> On Tue, Jul 31, 2012 at 1:09 PM, David Arthur <mu...@gmail.com> wrote:
>> Greetings all,
>> 
>> I'm using the KafkaLog4jAppender with Solr and recently ran into an interesting issue. The disk filled up on my Kafka broker (just a single broker; this is a dev environment) and Solr slowed to a near halt. My best guess is that each log4j message was incurring significant overhead handling the exceptions coming back from the Kafka broker.
>> 
>> So I'm wondering: would it make sense to implement a backoff strategy for this client if it starts getting exceptions from the server? Alternatively, could the Kafka broker mark itself as "down" in ZooKeeper when it gets into certain situations (like a full disk)? I guess this could really apply to any client, not just the log4j appender.
>> 
>> Thanks!
>> -David


Re: Log4jAppender backoff if server is down

Posted by Neha Narkhede <ne...@gmail.com>.
David,

Would you mind sending around the error stack traces? That will help
determine the right fix.

Thanks,
Neha

On Tue, Jul 31, 2012 at 1:09 PM, David Arthur <mu...@gmail.com> wrote:
> Greetings all,
>
> I'm using the KafkaLog4jAppender with Solr and recently ran into an interesting issue. The disk filled up on my Kafka broker (just a single broker; this is a dev environment) and Solr slowed to a near halt. My best guess is that each log4j message was incurring significant overhead handling the exceptions coming back from the Kafka broker.
>
> So I'm wondering: would it make sense to implement a backoff strategy for this client if it starts getting exceptions from the server? Alternatively, could the Kafka broker mark itself as "down" in ZooKeeper when it gets into certain situations (like a full disk)? I guess this could really apply to any client, not just the log4j appender.
>
> Thanks!
> -David