Posted to users@kafka.apache.org by navneet sharma <na...@gmail.com> on 2012/04/18 11:00:50 UTC

Dynamic broker discovery not working for me

Hi All,

I was trying the following scenario:
1) Start ZooKeeper.
2) Start server 1, i.e. broker 1. It connects to ZooKeeper.
3) Start a stand-alone Java producer program. It pushes string messages
read from a file line by line, dividing the messages among 3 topics
before pushing.
4) Start a stand-alone Java consumer program. This is a set of 3
consumers, each dedicated to a different topic.

After this:
5) I started another broker on a different port.
6) But it was never discovered by the producer, which kept pushing
everything to the first broker only.
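The topic-splitting step in 3) can be sketched as plain Java. This is a minimal sketch of the dispatch logic only; the topic names (other than cartTopic, which appears in the logs below) and the round-robin assignment are assumptions, not taken from the original program:

```java
import java.util.List;

public class TopicSplitter {
    // Hypothetical topic names; the thread only confirms "cartTopic".
    static final String[] TOPICS = {"cartTopic", "orderTopic", "userTopic"};

    // Assign each line read from the file to one of the 3 topics, round-robin.
    static String topicFor(int lineNumber) {
        return TOPICS[lineNumber % TOPICS.length];
    }

    public static void main(String[] args) {
        List<String> lines = List.of("line a", "line b", "line c", "line d");
        for (int i = 0; i < lines.size(); i++) {
            // In the real program, each line would be sent to topicFor(i)
            // via the producer instead of printed.
            System.out.println(topicFor(i) + " <- " + lines.get(i));
        }
    }
}
```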

I could see this in the producer logs:
13:42:43,227 [main] DEBUG kafka.producer.Producer  - Getting the number of
broker partitions registered for topic: cartTopic
13:42:43,227 [main] DEBUG kafka.producer.Producer  - Broker partitions
registered for topic: cartTopic = List(1-0)
13:42:43,227 [main] DEBUG kafka.producer.Producer  - Sending message to
broker 127.0.1.1:9095 on partition 0
13:42:43,227 [main] DEBUG kafka.producer.ProducerPool  - Fetching sync
producer for broker id: 1
13:42:43,227 [main] DEBUG kafka.message.ByteBufferMessageSet  - makeNext()
in deepIterator: innerDone = true
13:42:43,228 [main] DEBUG kafka.message.ByteBufferMessageSet  - Message is
uncompressed. Valid byte count = 0
13:42:43,228 [main] DEBUG kafka.message.ByteBufferMessageSet  - makeNext()
in deepIterator: innerDone = true
13:42:43,228 [main] DEBUG kafka.producer.ProducerPool  - Sending message to
broker 1

In fact, I suspected that the producer might sync up with ZooKeeper only
at startup. So, with ZooKeeper and both brokers up, I re-ran the producer
and consumer, but got the same result.

Am I missing anything?

Thanks,
Navneet Sharma

RE: Dynamic broker discovery not working for me

Posted by "Bateman, Matt" <ma...@ebay.com>.
I actually filed this in JIRA some time back: https://issues.apache.org/jira/browse/KAFKA-278




Re: Dynamic broker discovery not working for me

Posted by navneet sharma <na...@gmail.com>.
I am using ZooKeeper-based discovery (not a static broker list).

Looks like there are 2 workarounds:
1) Create the topic directory on the new broker and restart it.
2) Shut down all brokers except the new one so that requests start
flowing to it, then start all of them again.

The best way would be a fix in the next release.


Re: Dynamic broker discovery not working for me

Posted by Neha Narkhede <ne...@gmail.com>.
Navneet,

I suspect you may be configuring the producer with broker.list to
connect to the Kafka brokers. If so, the producer has no way to
detect new brokers until you change the broker.list value to include
them and restart the producer. If you are using the zk.connect option
instead, the producer should be able to send messages at least to
partition 0 on the new broker.

Thanks,
Neha
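For reference, the two producer configurations Neha contrasts look roughly like this in a 0.7-era producer.properties; the host/port values and broker ids are placeholders:

```properties
# ZooKeeper-based discovery: the producer learns about brokers from ZK
zk.connect=localhost:2181

# Static list (brokerid:host:port): new brokers stay invisible until this
# line is edited and the producer restarted
# broker.list=1:localhost:9092,2:localhost:9095
```

Only one of the two options should be set; broker.list bypasses ZooKeeper entirely.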


Re: Dynamic broker discovery not working for me

Posted by Jun Rao <ju...@gmail.com>.
Navneet,

Yes, this is a bug in the producer logic: it only bootstraps partitions
on the first new broker. This will be fixed in 0.8, since partitions are
no longer tied to physical brokers. For now, the quick fix is to create
a topic directory on disk for the new topic on each broker where you
want the topic to reside, and restart those brokers.

Thanks,

Jun
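Jun's workaround can be sketched as shell commands. This is a sketch under assumptions: the log directory path is the default log.dir, and the topic/partition name is taken from the cartTopic partition 0 seen in the producer logs:

```shell
# On each broker that should host the topic, create the partition
# directory under that broker's log.dir (default: /tmp/kafka-logs),
# then restart the broker so it registers the topic in ZooKeeper.
LOG_DIR=/tmp/kafka-logs        # adjust to the broker's configured log.dir
mkdir -p "$LOG_DIR/cartTopic-0"
# Then restart the broker process, e.g.:
# bin/kafka-server-stop.sh && bin/kafka-server-start.sh config/server.properties
```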
