Posted to users@kafka.apache.org by Gary Ogden <go...@gmail.com> on 2015/02/25 14:08:13 UTC

non-blocking sends when cluster is down

Say the entire Kafka cluster is down and there are no brokers to connect to.
Is it possible to use the Java producer's send() method and not block until
there's a timeout?  Is it as simple as registering a callback method?

We need our application to not have any kind of delay when sending messages
while the cluster is down.  It's OK if the messages are lost when the
cluster is down.

Thanks!
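
For reference, a minimal sketch of the asynchronous send-with-callback
pattern in the new Java producer; the broker address, topic, and class name
below are placeholders rather than anything from this thread:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.Callback;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class FireAndForgetSend {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");  // placeholder broker list
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);

            // send() hands the record to a background I/O thread and returns
            // immediately; the callback fires once the broker acks or the send
            // fails. Note that send() itself can still block while fetching
            // metadata, which is what the rest of this thread discusses.
            producer.send(new ProducerRecord<>("events", "key", "value"), new Callback() {
                @Override
                public void onCompletion(RecordMetadata metadata, Exception exception) {
                    if (exception != null) {
                        // cluster unreachable or send failed: OK to drop the message
                    }
                }
            });

            producer.close();
        }
    }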

Re: non-blocking sends when cluster is down

Posted by Gary Ogden <go...@gmail.com>.
Thanks Steven. We changed the code to ensure that the producer is only
created once and reused, so that the metadata fetch doesn't happen on every
send() call.
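
As a rough sketch (the class and method names here are illustrative, not
from our code), reusing a single producer instance looks something like:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // One producer instance created at startup and shared by all sends, so
    // the blocking metadata fetch happens at most once rather than on every
    // call.
    public class EventPublisher {
        private final KafkaProducer<String, String> producer;

        public EventPublisher(Properties props) {
            this.producer = new KafkaProducer<>(props);  // created once, reused
        }

        public void publish(String topic, String value) {
            // asynchronous once metadata for the topic is cached
            producer.send(new ProducerRecord<>(topic, value));
        }

        public void close() {
            producer.close();
        }
    }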

On 26 February 2015 at 12:44, Steven Wu <st...@gmail.com> wrote:


Re: non-blocking sends when cluster is down

Posted by Steven Wu <st...@gmail.com>.
The metadata fetch only happens/blocks the first time you call send(). After
the metadata is retrieved it is cached in memory, so subsequent sends will
not block again. So yes, there is a possibility the first send can block. Of
course, if the cluster is down and the metadata was never fetched, then
every send can block.

Metadata is also refreshed periodically after the first fetch:
metadata.max.age.ms=300000
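
In configuration terms that refresh interval is just another producer
property; a minimal sketch, with a placeholder broker address and the
default value quoted above:

    import java.util.Properties;

    public class ProducerConfigDefaults {
        static Properties baseConfig() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");  // placeholder
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            // How long cached metadata is used before a periodic refresh is
            // forced; 300000 ms (5 minutes) is the default mentioned above.
            props.put("metadata.max.age.ms", "300000");
            return props;
        }
    }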


On Thu, Feb 26, 2015 at 4:47 AM, Gary Ogden <go...@gmail.com> wrote:


Re: non-blocking sends when cluster is down

Posted by Gary Ogden <go...@gmail.com>.
I was actually referring to the metadata fetch. Sorry, I should have been
more descriptive. I know we can set metadata.fetch.timeout.ms a lot lower,
but send() still blocks if it can't get the metadata. And I believe that the
metadata fetch happens every time we call send()?
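
A sketch of what capping that initial blocking wait might look like; the
values and broker address are illustrative, and since the exact exception
surfaced on a metadata timeout depends on the producer version, the catch
below is deliberately broad:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.KafkaException;

    public class FastFailMetadataExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");  // placeholder
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            // Cap how long send() may block waiting for the first metadata
            // fetch (the 0.8.2 producer defaults to 60 seconds).
            props.put("metadata.fetch.timeout.ms", "1000");

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            try {
                producer.send(new ProducerRecord<>("events", "value"));
            } catch (KafkaException e) {
                // metadata could not be fetched in time: drop the message
            } finally {
                producer.close();
            }
        }
    }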

On 25 February 2015 at 19:03, Guozhang Wang <wa...@gmail.com> wrote:


Re: non-blocking sends when cluster is down

Posted by Guozhang Wang <wa...@gmail.com>.
Hi Gary,

The Java producer will block on send() when the buffer is full and
block.on.buffer.full = true
(http://kafka.apache.org/documentation.html#newproducerconfigs). If you set
the config to false, the send() call will throw a BufferExhaustedException
which, in your case, can be caught and ignored, allowing the message to drop
on the floor.
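
A rough sketch of that approach (broker address and topic are placeholders;
block.on.buffer.full and BufferExhaustedException are from the 0.8.2-era
producer configs linked above):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.BufferExhaustedException;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class DropWhenBufferFull {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");  // placeholder
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            // Throw instead of blocking when the record accumulator fills up
            // (e.g. the cluster is down and records cannot be flushed).
            props.put("block.on.buffer.full", "false");

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            try {
                producer.send(new ProducerRecord<>("events", "value"));
            } catch (BufferExhaustedException e) {
                // buffer is full: ignore and let the message drop on the floor
            } finally {
                producer.close();
            }
        }
    }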

Guozhang



On Wed, Feb 25, 2015 at 5:08 AM, Gary Ogden <go...@gmail.com> wrote:




-- 
-- Guozhang