Posted to users@kafka.apache.org by Yang <te...@gmail.com> on 2013/09/03 19:09:50 UTC

default producer to retro-fit existing log files collection process?

In many setups we have production web server logs rotated on local disks
and then collected by some sort of scp process.

I guess the ideal way to use Kafka would be to write a module for Tomcat
that catches each request and sends it through the Kafka API. But is there
a "quick and dirty" producer included with Kafka that just reads the
existing rotated logs and sends them through the Kafka API? That would
avoid having to touch the existing Java code.

thanks
Yang

Re: default producer to retro-fit existing log files collection process?

Posted by Benjamin Black <b...@b3k.us>.
commons-logging has a log4j logger, so perhaps you just need to use it and
the log4j-kafka appender to achieve your goal?
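[For reference, a log4j configuration along these lines might look like the sketch below. This is only a sketch: the appender class and property names (`BrokerList`, `Topic`, the logger category) vary across Kafka versions, and the broker address and topic name are placeholders — check the `KafkaLog4jAppender` that ships with your release.]

```properties
# Route an existing log4j logger to Kafka via the bundled appender
# (sketch; class and property names depend on your Kafka version).
log4j.appender.KAFKA=kafka.producer.KafkaLog4jAppender
log4j.appender.KAFKA.BrokerList=localhost:9092
log4j.appender.KAFKA.Topic=web-logs
log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
log4j.appender.KAFKA.layout.ConversionPattern=%d %-5p %c - %m%n

# Attach the Kafka appender to the logger whose output you want shipped.
log4j.logger.org.apache.catalina=INFO, KAFKA
```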



Re: default producer to retro-fit existing log files collection process?

Posted by Jay Kreps <ja...@gmail.com>.
As Neha says, the best thing we currently provide is the console producer.
A more flexible framework specifically targeted at log slurping would be a
cool open source project.
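[By way of illustration, the core trick such a log-slurping framework needs — noticing that logrotate swapped the file out from under it and re-opening — could be sketched like this. Everything here is hypothetical (the `LogSlurper` name, the `send` callback standing in for a real Kafka producer); it is a sketch of the technique, not a production tool.]

```python
import os

class LogSlurper:
    """Minimal sketch of a rotation-aware log reader. Each complete new line
    is handed to a `send` callback, where a real tool would call a Kafka
    producer instead."""

    def __init__(self, path, send):
        self.path = path
        self.send = send
        self.f = None
        self.inode = None

    def poll(self):
        """Read any new lines, re-opening the file if logrotate replaced it."""
        try:
            inode = os.stat(self.path).st_ino
        except FileNotFoundError:
            return  # mid-rotation window; try again on the next poll
        if self.f is None or inode != self.inode:
            # File was created or rotated. A production tool would drain the
            # old handle (and checkpoint offsets) before switching over.
            if self.f is not None:
                self.f.close()
            self.f = open(self.path)
            self.inode = inode
        for line in self.f:
            self.send(line.rstrip("\n"))
```

Calling `poll()` periodically against the live log picks up appended lines and survives a rename-based rotation; offset checkpointing and delivery guarantees are the hard parts such a project would add.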

-Jay



RE: default producer to retro-fit existing log files collection process?

Posted by Neha Narkhede <ne...@gmail.com>.
A quick and dirty solution would be to tail the logs and use the console
producer to send the data to Kafka.
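[Concretely, that pipeline might look like the sketch below. The log path, broker address, and topic name are placeholders, and the console producer's flags differ between Kafka versions — check `bin/kafka-console-producer.sh --help` for yours.]

```shell
# Follow the access log across rotations (-F re-opens the file after
# logrotate) and pipe each line into Kafka's console producer.
tail -F /var/log/httpd/access_log \
  | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic web-logs
```

Note that `tail -F` starts at the end of the file and nothing tracks offsets, so a restart of the pipeline can drop or duplicate lines — hence "quick and dirty".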

Thanks,
Neha

RE: default producer to retro-fit existing log files collection process?

Posted by Maxime Petazzoni <Ma...@turn.com>.
Tomcat uses commons-logging for logging. You might be able to write an adapter to Kafka, similar to the log4j-kafka appender. I think this would be cleaner than writing something Tomcat-specific that intercepts your requests and logs them through Kafka.

/Max
--
Maxime Petazzoni
Sr. Platform Engineer
m 408.310.0595
www.turn.com
