Posted to users@kafka.apache.org by Jason Rosenberg <jb...@squareup.com> on 2013/05/09 07:15:51 UTC

Can't connect to a server if not enough partitions

With 0.8.0, I'm seeing that an initial metadata request fails if the
number of running brokers is fewer than the configured replication factor:

877 [kafka-request-handler-0] ERROR kafka.server.KafkaApis  -
[KafkaApi-1946108683] Error while retrieving topic metadata
kafka.admin.AdministrationException: replication factor: 2 larger than
available brokers: 1
at kafka.admin.AdminUtils$.assignReplicasToBrokers(AdminUtils.scala:62)
at kafka.admin.CreateTopicCommand$.createTopic(CreateTopicCommand.scala:92)
at kafka.server.KafkaApis$$anonfun$handleTopicMetadataRequest$1.apply(KafkaApis.scala:409)
at kafka.server.KafkaApis$$anonfun$handleTopicMetadataRequest$1.apply(KafkaApis.scala:401)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:81)
at kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:400)
at kafka.server.KafkaApis.handle(KafkaApis.scala:61)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:41)
at java.lang.Thread.run(Thread.java:680)

However, if the number of brokers goes down after connecting, producing
clients have no problems continuing to send messages, etc.

So, I thought the idea was that once a replica becomes available again, it
will be caught up with any messages it might have missed.  This is good
because it makes things like rolling restarts of the brokers possible.
 But it's a problem if a rolling restart happens at the same time a new
client is coming online to try to initialize a connection.

Thoughts?

Shouldn't the requirements be the same for initial connections as for
ongoing connections?

Jason
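(The asymmetry Jason describes can be sketched as a toy model. The MiniCluster class and its methods below are illustrative inventions, not Kafka's actual API: creating a topic is refused when there are fewer live brokers than the replication factor, but an already-created topic keeps accepting writes after a broker drops out.)

```python
# Toy model of the 0.8.0 behavior described above (not Kafka's real API).
class MiniCluster:
    def __init__(self, live_brokers):
        self.live_brokers = set(live_brokers)
        self.topics = {}  # topic name -> replication factor

    def create_topic(self, name, replication_factor):
        # Mirrors the guard that produced the error in the log above:
        # creation fails when replication factor > live brokers.
        if replication_factor > len(self.live_brokers):
            raise RuntimeError(
                f"replication factor: {replication_factor} larger than "
                f"available brokers: {len(self.live_brokers)}")
        self.topics[name] = replication_factor

    def produce(self, name, message):
        # An existing topic accepts writes as long as it exists, even
        # if fewer brokers are live than its replication factor.
        if name not in self.topics:
            raise KeyError(f"unknown topic: {name}")
        return "ack"

cluster = MiniCluster(live_brokers={1, 2})
cluster.create_topic("events", replication_factor=2)  # succeeds: 2 brokers >= 2
cluster.live_brokers.discard(2)                       # one broker goes down
print(cluster.produce("events", "hello"))             # still acks
```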

Re: Can't connect to a server if not enough partitions

Posted by Jun Rao <ju...@gmail.com>.
Currently, when creating a topic, we require the number of live brokers to
be at least the replication factor. Once the topic is created, the
number of live brokers can be less than the replication factor.

Thanks,

Jun
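(Jun's rule is the guard inside replica assignment. A minimal sketch in the spirit of AdminUtils.assignReplicasToBrokers follows; the real logic also staggers follower replicas to spread leaders, so this simplified round-robin only illustrates the guard and the basic assignment shape.)

```python
def assign_replicas(broker_ids, num_partitions, replication_factor):
    """Simplified round-robin replica assignment (illustrative only)."""
    brokers = sorted(broker_ids)
    # The creation-time invariant Jun describes: live brokers must be
    # at least the replication factor, or assignment is impossible.
    if replication_factor > len(brokers):
        raise ValueError(
            f"replication factor: {replication_factor} larger than "
            f"available brokers: {len(brokers)}")
    assignment = {}
    for p in range(num_partitions):
        # Each partition starts at a different broker and wraps around,
        # so no partition lists the same broker twice.
        assignment[p] = [brokers[(p + r) % len(brokers)]
                         for r in range(replication_factor)]
    return assignment

print(assign_replicas({0, 1, 2}, num_partitions=3, replication_factor=2))
# → {0: [0, 1], 1: [1, 2], 2: [2, 0]}
```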


On Wed, May 8, 2013 at 11:50 PM, Jason Rosenberg <jb...@squareup.com> wrote:

> Neha,
>
> Thanks, I think I did understand what was going on (despite the error
> message).  And my question stands, if a broker is momentarily down,
> shouldn't we still be able to create a topic?  If we send a message to a
> topic it will succeed, even if not all replicas are available.  Why should
> the initial message be any different?
>
> Jason
>
>
> On Wed, May 8, 2013 at 10:57 PM, Neha Narkhede <neha.narkhede@gmail.com> wrote:
>
> > I think this error message is somewhat misleading since we create topic on
> > the first metadata request. It is complaining that a topic with the
> > required replication factor cannot be created if there aren't enough
> > brokers to satisfy the replication factor. This is expected behavior
> > whether you use auto creation of topics or manual creation. However, the
> > metadata requests will always give you correct information about existing
> > topics.
> >
> > Thanks,
> > Neha
> >
> >
> > On Wed, May 8, 2013 at 10:15 PM, Jason Rosenberg <jb...@squareup.com> wrote:
> >
> > > With 0.8.0, I'm seeing that an initial metadata request fails, if the
> > > number of running brokers is fewer than the configured replication factor:
> > >
> > > 877 [kafka-request-handler-0] ERROR kafka.server.KafkaApis  -
> > > [KafkaApi-1946108683] Error while retrieving topic metadata
> > > kafka.admin.AdministrationException: replication factor: 2 larger than
> > > available brokers: 1
> > > at kafka.admin.AdminUtils$.assignReplicasToBrokers(AdminUtils.scala:62)
> > > at kafka.admin.CreateTopicCommand$.createTopic(CreateTopicCommand.scala:92)
> > > at kafka.server.KafkaApis$$anonfun$handleTopicMetadataRequest$1.apply(KafkaApis.scala:409)
> > > at kafka.server.KafkaApis$$anonfun$handleTopicMetadataRequest$1.apply(KafkaApis.scala:401)
> > > at scala.collection.immutable.Set$Set1.foreach(Set.scala:81)
> > > at kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:400)
> > > at kafka.server.KafkaApis.handle(KafkaApis.scala:61)
> > > at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:41)
> > > at java.lang.Thread.run(Thread.java:680)
> > >
> > > However, if after connecting, the number of brokers goes down, producing
> > > clients have no problems continuing sending messages, etc.
> > >
> > > So, I thought the idea was that once a replica becomes available, it will
> > > be caught up with messages it might have missed, etc.  This is good because
> > > it makes doing things like rolling restarts of the brokers possible, etc.
> > >  But it's a problem if a rolling restart happens at the same time a new
> > > client is coming online to try and initialize a connection.
> > >
> > > Thoughts?
> > >
> > > Shouldn't the requirements be the same for initial connections as ongoing
> > > connections?
> > >
> > > Jason
> > >
> >
>

Re: Can't connect to a server if not enough partitions

Posted by Jason Rosenberg <jb...@squareup.com>.
Neha,

Thanks, I think I did understand what was going on (despite the error
message).  And my question stands: if a broker is momentarily down,
shouldn't we still be able to create a topic?  If we send a message to a
topic it will succeed, even if not all replicas are available.  Why should
the initial message be any different?

Jason


On Wed, May 8, 2013 at 10:57 PM, Neha Narkhede <ne...@gmail.com> wrote:

> I think this error message is somewhat misleading since we create topic on
> the first metadata request. It is complaining that a topic with the
> required replication factor cannot be created if there aren't enough
> brokers to satisfy the replication factor. This is expected behavior
> whether you use auto creation of topics or manual creation. However, the
> metadata requests will always give you correct information about existing
> topics.
>
> Thanks,
> Neha
>
>
> On Wed, May 8, 2013 at 10:15 PM, Jason Rosenberg <jb...@squareup.com> wrote:
>
> > With 0.8.0, I'm seeing that an initial metadata request fails, if the
> > number of running brokers is fewer than the configured replication factor:
> >
> > 877 [kafka-request-handler-0] ERROR kafka.server.KafkaApis  -
> > [KafkaApi-1946108683] Error while retrieving topic metadata
> > kafka.admin.AdministrationException: replication factor: 2 larger than
> > available brokers: 1
> > at kafka.admin.AdminUtils$.assignReplicasToBrokers(AdminUtils.scala:62)
> > at kafka.admin.CreateTopicCommand$.createTopic(CreateTopicCommand.scala:92)
> > at kafka.server.KafkaApis$$anonfun$handleTopicMetadataRequest$1.apply(KafkaApis.scala:409)
> > at kafka.server.KafkaApis$$anonfun$handleTopicMetadataRequest$1.apply(KafkaApis.scala:401)
> > at scala.collection.immutable.Set$Set1.foreach(Set.scala:81)
> > at kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:400)
> > at kafka.server.KafkaApis.handle(KafkaApis.scala:61)
> > at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:41)
> > at java.lang.Thread.run(Thread.java:680)
> >
> > However, if after connecting, the number of brokers goes down, producing
> > clients have no problems continuing sending messages, etc.
> >
> > So, I thought the idea was that once a replica becomes available, it will
> > be caught up with messages it might have missed, etc.  This is good because
> > it makes doing things like rolling restarts of the brokers possible, etc.
> >  But it's a problem if a rolling restart happens at the same time a new
> > client is coming online to try and initialize a connection.
> >
> > Thoughts?
> >
> > Shouldn't the requirements be the same for initial connections as ongoing
> > connections?
> >
> > Jason
> >
>

Re: Can't connect to a server if not enough partitions

Posted by Neha Narkhede <ne...@gmail.com>.
I think this error message is somewhat misleading, since we create the topic
on the first metadata request. It is complaining that a topic with the
required replication factor cannot be created if there aren't enough
brokers to satisfy the replication factor. This is expected behavior
whether you use auto-creation of topics or manual creation. However,
metadata requests will always give you correct information about existing
topics.

Thanks,
Neha
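(Neha's point is that the failure happens on the auto-create path, not on lookups of existing topics. A hedged sketch of that flow follows; the function name and parameters are hypothetical, not Kafka's actual handler code.)

```python
def handle_metadata_request(topic, existing_topics, live_brokers,
                            default_replication_factor):
    """Sketch of the 0.8 metadata-request flow Neha describes:
    metadata for an existing topic always succeeds; a request for an
    unknown topic triggers auto-creation, which fails when there are
    fewer live brokers than the replication factor."""
    if topic in existing_topics:
        # Existing topics: metadata is returned regardless of how
        # many brokers are currently live.
        return {"topic": topic, "replication": existing_topics[topic]}
    if default_replication_factor > len(live_brokers):
        raise RuntimeError(
            f"replication factor: {default_replication_factor} larger "
            f"than available brokers: {len(live_brokers)}")
    existing_topics[topic] = default_replication_factor  # auto-create
    return {"topic": topic, "replication": default_replication_factor}

topics = {"existing": 2}
# Existing topic: succeeds even with a single live broker.
print(handle_metadata_request("existing", topics, {1}, 2))
```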


On Wed, May 8, 2013 at 10:15 PM, Jason Rosenberg <jb...@squareup.com> wrote:

> With 0.8.0, I'm seeing that an initial metadata request fails, if the
> number of running brokers is fewer than the configured replication factor:
>
> 877 [kafka-request-handler-0] ERROR kafka.server.KafkaApis  -
> [KafkaApi-1946108683] Error while retrieving topic metadata
> kafka.admin.AdministrationException: replication factor: 2 larger than
> available brokers: 1
> at kafka.admin.AdminUtils$.assignReplicasToBrokers(AdminUtils.scala:62)
> at kafka.admin.CreateTopicCommand$.createTopic(CreateTopicCommand.scala:92)
> at kafka.server.KafkaApis$$anonfun$handleTopicMetadataRequest$1.apply(KafkaApis.scala:409)
> at kafka.server.KafkaApis$$anonfun$handleTopicMetadataRequest$1.apply(KafkaApis.scala:401)
> at scala.collection.immutable.Set$Set1.foreach(Set.scala:81)
> at kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:400)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:61)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:41)
> at java.lang.Thread.run(Thread.java:680)
>
> However, if after connecting, the number of brokers goes down, producing
> clients have no problems continuing sending messages, etc.
>
> So, I thought the idea was that once a replica becomes available, it will
> be caught up with messages it might have missed, etc.  This is good because
> it makes doing things like rolling restarts of the brokers possible, etc.
>  But it's a problem if a rolling restart happens at the same time a new
> client is coming online to try and initialize a connection.
>
> Thoughts?
>
> Shouldn't the requirements be the same for initial connections as ongoing
> connections?
>
> Jason
>