Posted to users@kafka.apache.org by Claude Mamo <cl...@gmail.com> on 2014/06/03 03:03:56 UTC

Re: mBean to monitor message per partitions in topic

Or if you want something more user-friendly you can try out
https://github.com/quantifind/KafkaOffsetMonitor

Claude


On Sat, May 31, 2014 at 2:40 AM, Jun Rao <ju...@gmail.com> wrote:

> Ok, if you want to track this on the broker, you can use the consumer
> offset check tool that Guozhang mentioned.
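>
> For reference, a rough sketch of running that check from Java, assuming the
> tool in question is kafka.tools.ConsumerOffsetChecker (shipped with 0.8.x)
> and that the Kafka jars are on the classpath. The ZooKeeper address, group,
> and topic below are placeholders:
>
>     // Sketch: programmatically invoking kafka.tools.ConsumerOffsetChecker,
>     // which prints per-partition offsets, log end offsets, and lag for a
>     // consumer group. All argument values are placeholders.
>     public class OffsetCheckExample {
>         public static void main(String[] args) {
>             kafka.tools.ConsumerOffsetChecker.main(new String[] {
>                 "--zookeeper", "localhost:2181",
>                 "--group", "my-group",
>                 "--topic", "my-topic"
>             });
>         }
>     }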
>
> Thanks,
>
> Jun
>
>
> On Fri, May 30, 2014 at 7:44 AM, Рябков Алексей Николаевич <
> a.ryabkov@ntc-vulkan.ru> wrote:
>
> > Thanks... but I think it's really not suitable for my case.
> >
> > This is because I would have to connect to every consumer to gather such
> > stats, and I may have a lot of consumers... from 5k to 15k.
> >
> > So why not add getNumFetchRequests & getNumProduceRequests to
> > kafka.BrokerTopicStat.[topic]?
> > (I could use getBytesIn and getBytesOut... but my messages don't have a
> > fixed size.)
> >
> > I am trying to find a way to build a "clever" load balancer via
> > partitioner.class...
> >
> > Thanks in advance, Aleksey Ryabkov
> >
> >
> > -----Original Message-----
> > From: Jun Rao [mailto:junrao@gmail.com]
> > Sent: Friday, May 30, 2014 7:34 AM
> > To: users@kafka.apache.org
> > Subject: Re: mBean to monitor message per partitions in topic
> >
> > If you open up JMX (e.g. with jconsole) on a consumer instance, you will
> > see MBeans with names of the form *-ConsumerLag.
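> >
> > If you want to read those beans programmatically instead of through
> > jconsole, something along these lines should work. This is only a sketch:
> > the JMX host/port are placeholders (the consumer JVM must be started with
> > remote JMX enabled), the ObjectName pattern simply matches any bean whose
> > name contains "ConsumerLag", and the gauge attribute is assumed to be
> > exposed as "Value".
> >
> >     import java.util.Set;
> >     import javax.management.MBeanServerConnection;
> >     import javax.management.ObjectName;
> >     import javax.management.remote.JMXConnector;
> >     import javax.management.remote.JMXConnectorFactory;
> >     import javax.management.remote.JMXServiceURL;
> >
> >     // Sketch: list every *-ConsumerLag bean on a consumer and print its value.
> >     public class ConsumerLagReader {
> >         public static void main(String[] args) throws Exception {
> >             JMXServiceURL url = new JMXServiceURL(
> >                 "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
> >             JMXConnector connector = JMXConnectorFactory.connect(url);
> >             try {
> >                 MBeanServerConnection mbs = connector.getMBeanServerConnection();
> >                 // Wildcard query: any domain, any bean whose "name" property
> >                 // contains ConsumerLag.
> >                 Set<ObjectName> names = mbs.queryNames(
> >                     new ObjectName("*:name=*ConsumerLag*,*"), null);
> >                 for (ObjectName name : names) {
> >                     Object lag = mbs.getAttribute(name, "Value");
> >                     System.out.println(name + " = " + lag);
> >                 }
> >             } finally {
> >                 connector.close();
> >             }
> >         }
> >     }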
> >
> > Thanks,
> >
> > Jun
> >
> >
> > On Thu, May 29, 2014 at 8:53 AM, Рябков Алексей Николаевич <
> > a.ryabkov@ntc-vulkan.ru> wrote:
> >
> > > Do you mean getOffsetLag?
> > > (https://cwiki.apache.org/confluence/display/KAFKA/Operations)
> > >
> > > But that is all about offsets, not messages...
> > >
> > > So, for example, to create "real" load balancing between brokers I could
> > > do the following (sketched in code after this list):
> > >
> > > 1. Calculate the average number of unread messages in the topic per broker:
> > >    - Sall = sum(kafka.BrokerAllTopicStat.[topic].getMessagesIn) - total
> > >      messages sent to the topic across all brokers
> > >    - Rall = sum(kafka.ConsumerTopicStat.[topic].getMessagesPerTopic) -
> > >      total messages read from the topic across all consumers
> > >    - (Sall - Rall) / number of brokers - average unread messages in the
> > >      topic per broker
> > > 2. Get the current number of unread messages in the topic for a broker
> > >    (little hack here): for example, if we place N consumers per broker and
> > >    each consumer must read from only one partition, then we can compute:
> > >    - Sb = kafka.BrokerAllTopicStat.[topic].getMessagesIn - total messages
> > >      sent to the topic for broker B
> > >    - Rb = sum over all N consumers on broker B of
> > >      kafka.ConsumerTopicStat.[topic].getMessagesPerTopic - total messages
> > >      read from the topic by the N consumers on broker B
> > >    - (Sb - Rb) - unread messages in the topic for broker B
> > > 3. Use the partitioner to load balance between brokers: by comparing the
> > >    average unread messages per broker with the current unread messages for
> > >    a given broker, we can build a more clever load balancer...
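> > >
> > > A minimal sketch of that arithmetic in Java (all counter values below are
> > > placeholders standing in for readings from the MBeans named above):
> > >
> > >     // Sketch of the balancing arithmetic from steps 1-3. The counters
> > >     // would come from kafka.BrokerAllTopicStat / kafka.ConsumerTopicStat;
> > >     // the literal values here are placeholders.
> > >     public class UnreadBalanceSketch {
> > >         public static void main(String[] args) {
> > >             long sAll = 1000000;  // sum of getMessagesIn over all brokers
> > >             long rAll = 940000;   // sum of getMessagesPerTopic over all consumers
> > >             int brokerCount = 4;
> > >             double avgUnreadPerBroker = (double) (sAll - rAll) / brokerCount;
> > >
> > >             long sB = 260000;     // getMessagesIn for broker B
> > >             long rB = 250000;     // sum of getMessagesPerTopic for B's N consumers
> > >             long unreadOnB = sB - rB;
> > >
> > >             // A partitioner could prefer partitions on brokers whose
> > >             // backlog is below the cluster-wide average.
> > >             System.out.println("average unread per broker: " + avgUnreadPerBroker);
> > >             System.out.println("unread on broker B: " + unreadOnB
> > >                 + (unreadOnB > avgUnreadPerBroker ? " (above average)" : " (at/below average)"));
> > >         }
> > >     }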
> > >
> > >
> > > But what can you tell me about performance? How fast can I get these
> > > monitoring stats? Could you give me some advice on optimization?
> > >
> > >
> > > Thanks, Aleksey Ryabkov
> > >
> > >
> > > -----Original Message-----
> > > From: Jun Rao [mailto:junrao@gmail.com]
> > > Sent: Thursday, May 29, 2014 8:34 AM
> > > To: users@kafka.apache.org
> > > Subject: Re: mBean to monitor message per partitions in topic
> > >
> > > There is a per-partition jmx (*-ConsumerLag) in the consumer that
> > > reports unconsumed messages per partition.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > >
> > > On Wed, May 28, 2014 at 8:13 AM, Рябков Алексей Николаевич <
> > > a.ryabkov@ntc-vulkan.ru> wrote:
> > >
> > > > Hello!
> > > >
> > > > How can I get information about unfetched messages per partition in a
> > > > topic?
> > > >
> > > > I wish to use this information to create a custom partitioner.class
> > > > that balances messages between partitions.
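> > > >
> > > > For context, a skeleton of the kind of partitioner.class I mean,
> > > > against the 0.8 producer API; the backlog-aware selection is exactly
> > > > the part I am missing, so it is only stubbed out here, and the class
> > > > name is a placeholder:
> > > >
> > > >     import kafka.producer.Partitioner;
> > > >     import kafka.utils.VerifiableProperties;
> > > >
> > > >     // Skeleton for a custom partitioner registered through the
> > > >     // producer's "partitioner.class" property (0.8 producer API).
> > > >     public class BacklogAwarePartitioner implements Partitioner {
> > > >
> > > >         // The 0.8 producer instantiates the partitioner with a
> > > >         // VerifiableProperties constructor argument.
> > > >         public BacklogAwarePartitioner(VerifiableProperties props) {
> > > >         }
> > > >
> > > >         @Override
> > > >         public int partition(Object key, int numPartitions) {
> > > >             // Placeholder: hash the key. A backlog-aware version would
> > > >             // instead pick a partition on the broker with the smallest
> > > >             // unread-message count.
> > > >             return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
> > > >         }
> > > >     }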
> > > >
> > > > With best regards, Aleksey Ryabkov
> > > >
> > > >
> > >
> >
>