Posted to users@kafka.apache.org by Richard Lee <rd...@tivo.com> on 2015/10/01 19:21:33 UTC

Re: are 0.8.2.1 and 0.9.0.0 compatible?

Note that the 0.8.3-SNAPSHOT has recently been renamed 0.9.0.0-SNAPSHOT.

In any event, the major version number change could indicate that there 
has, in fact, been some sort of incompatible change.  Using 0.9.0.0, I'm 
also unable to use kafka-console-consumer.sh to read from a 0.8.2.1 
broker, though it works fine against a 0.9.0.0 broker.
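
For reference, the invocation I'm using is roughly along these lines 
(the zookeeper host and topic name here are just placeholders):

  bin/kafka-console-consumer.sh --zookeeper zkhost:2181 --topic mytopic --from-beginning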

It would be appreciated if a kafka expert could confirm that broker 
forward compatibility (or client backward compatibility) is not 
supported, and that this isn't just a case of some sort of local, 
fixable misconfiguration.

Thanks!
Richard

On 09/30/2015 11:17 AM, Doug Tomm wrote:
> hello,
>
> i've got a set of broker nodes running 0.8.2.1.  on my laptop i'm also 
> running 0.8.2.1, and i have a single broker node and mirrormaker 
> there.  i'm also using kafka-console-consumer.sh on the mac to display 
> messages on a favorite topic being published from the broker nodes.  
> there are no messages on the topic, but everything is well-behaved.  i 
> can inject messages with kafkacat and everything is fine.
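>
> (for reference, the kafkacat injection i mean is roughly this -- the
> broker and topic names are just placeholders:)
>
>   echo "hello" | kafkacat -P -b localhost:9092 -t mytopic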
>
> but then!
>
> on the laptop i switched everything to 0.8.3 but left the broker nodes 
> alone.  now when i run mirrormaker i see this:
>
> [2015-09-30 10:44:55,090] WARN [ConsumerFetcherThread-tivo_kafka_110339-mbpr.local-1443635093396-c55cbafb-0-5], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest@61cb11c5. Possible cause: java.nio.BufferUnderflowException (kafka.consumer.ConsumerFetcherThread)
> [2015-09-30 10:44:55,624] WARN [ConsumerFetcherThread-tivo_kafka_110339-mbpr.local-1443635093396-c55cbafb-0-5], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest@3c7bb986. Possible cause: java.nio.BufferUnderflowException (kafka.consumer.ConsumerFetcherThread)
> [2015-09-30 10:44:56,181] WARN [ConsumerFetcherThread-tivo_kafka_110339-mbpr.local-1443635093396-c55cbafb-0-5], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest@1d4fbd2c. Possible cause: java.nio.BufferUnderflowException (kafka.consumer.ConsumerFetcherThread)
> [2015-09-30 10:44:56,726] WARN [ConsumerFetcherThread-tivo_kafka_110339-mbpr.local-1443635093396-c55cbafb-0-5], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest@59e67b2f. Possible cause: java.nio.BufferUnderflowException (kafka.consumer.ConsumerFetcherThread)
>
> if i use kafkacat to generate a message on the topic i see 
> IllegalArgumentExceptions instead.
>
> this suggests that the two versions of kafka aren't compatible. is 
> this the case?  does the whole ecosystem need to be on the same version?
>
> thank you,
> doug
>


Re: are 0.8.2.1 and 0.9.0.0 compatible?

Posted by Grant Henke <gh...@cloudera.com>.
Jason,

The version number in the docs does need to be updated, and will be before
the release.

I agree that removing the property rather than setting it to the new
version might be better. However, others may want to set the new version
so the value is already hard-coded for the next upgrade. Both are valid
options.
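
As a sketch of the two end states once the cluster is fully upgraded
(the version string here is illustrative), the server.properties line
would either be pinned or dropped:

  # Option 1: pin the protocol version explicitly (already in place for the next upgrade)
  inter.broker.protocol.version=0.9.0.0

  # Option 2: remove the line entirely; the broker then defaults to the
  # latest protocol version it supports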

Thanks,
Grant

-- 
Grant Henke
Software Engineer | Cloudera
grant@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke

Re: are 0.8.2.1 and 0.9.0.0 compatible?

Posted by Jason Rosenberg <jb...@squareup.com>.
Of course, that documentation needs to be updated to refer to '0.9.X'!

Also, I'm wondering if the last step there should be changed to remove the
property altogether and restart (rather than setting it to the new
version), since once the code is updated, the broker will use the new
version by default?


Re: are 0.8.2.1 and 0.9.0.0 compatible?

Posted by Richard Lee <rd...@tivo.com>.
Great, that makes sense.  Forward compatibility in the brokers is likely 
hard, though it would be nice if clients were backward compatible.  I 
guess, though, that implementing that requires KIP-35.

Thanks for the 0.9.0.0 rolling upgrade pointer.

Richard


Re: are 0.8.2.1 and 0.9.0.0 compatible?

Posted by Grant Henke <gh...@cloudera.com>.
Hi Richard,

You are correct that the version will now be 0.9.0 and anything referencing
0.8.3 is being changed. You are also correct that there have been wire
protocol changes that break compatibility. However, backward compatibility
exists and you should always upgrade your brokers before upgrading your
clients in order to avoid issues (in the future, KIP-35
<https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version>
may change that).

It's also worth noting that if you are performing a rolling upgrade of your
brokers, you need to be sure that brokers running the new code know to
communicate using the old protocol version, so they remain compatible during
the bounce. This is done with the inter.broker.protocol.version property.
More on that topic can be read here:
https://kafka.apache.org/083/documentation.html#upgrade
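
As a rough sketch of that rolling bounce (the exact steps are in the upgrade
doc linked above; the version strings here are illustrative):

  # 1. Before swapping in the new code, add this to server.properties on each broker:
  inter.broker.protocol.version=0.8.2.X

  # 2. Upgrade the code and restart the brokers one at a time.

  # 3. Once the whole cluster is on the new release, bump the protocol version:
  inter.broker.protocol.version=0.9.0.0

  # 4. Restart the brokers one at a time again so the new protocol takes effect.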

Hopefully that helps clear things up.

Thank you,
Grant





-- 
Grant Henke
Software Engineer | Cloudera
grant@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke