Posted to users@kafka.apache.org by Murilo Tavares <mu...@gmail.com> on 2021/10/13 14:34:50 UTC

Downgrading KafkaStreams

Hi
I have a large KafkaStreams topology, and for a while I have failed to
upgrade it from version 2.4.1 to 2.7.0, and this time to version 2.8.1
(it keeps getting stuck in a rebalance loop).
I was able to revert it from v2.7.0 back to 2.4.1 in the past, but now I
can't roll back my code, as I get the following error.
I have cleaned up the state via "streams.cleanUp()", but still can't get
rid of the error.
Any suggestion on how to downgrade it?


Exception in thread "streams-app-0-StreamThread-1" org.apache.kafka.streams.errors.TaskAssignmentException: Unable to decode assignment data: used version: 9; latest supported version: 5
    at org.apache.kafka.streams.processor.internals.assignment.AssignmentInfo.decode(AssignmentInfo.java:306)
    at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.onAssignment(StreamsPartitionAssignor.java:1091)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:391)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:421)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:340)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:471)
    at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1267)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1231)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:843)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:743)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:698)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:671)
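
Note that "streams.cleanUp()" only wipes this instance's local state
directory and may only be called while the instance is not running; it
does not touch the consumer group or any broker-side metadata, which is
presumably why it does not make the assignment error above go away. A
minimal sketch of how it is typically wired in before start; the
application id, bootstrap servers, topic names and topology below are
placeholders, not the actual application:

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;

public class StreamsRestart {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-app");       // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        // Stand-in topology; the real application builds its own.
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic");
        Topology topology = builder.build();

        KafkaStreams streams = new KafkaStreams(topology, props);
        // Deletes the local state directory for this instance only; legal to call
        // only before start() or after close().
        streams.cleanUp();
        streams.start();

        // Close the instance cleanly on JVM shutdown.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}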

Re: Downgrading KafkaStreams

Posted by "Matthias J. Sax" <mj...@apache.org>.
Not sure off the top of my head. Maybe the logs could reveal something.
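
If it helps with digging in, a rough sketch of turning up the loggers
that cover the assignor and the rebalance path; the logger names are
taken from the stack trace above, and this assumes the log4j 1.x backend
that clients of this era commonly run with (the same two logger names
can just as well be set in log4j.properties):

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public final class RebalanceDebugLogging {
    // Call once at startup, before the KafkaStreams instance is created.
    public static void enable() {
        Logger.getLogger("org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor")
              .setLevel(Level.DEBUG);
        Logger.getLogger("org.apache.kafka.clients.consumer.internals").setLevel(Level.DEBUG);
    }
}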



On 10/14/21 4:49 PM, Sophie Blee-Goldman wrote:
> Matthias, shouldn't version probing prevent this sort of thing from
> happening? That is, shouldn't a live downgrade always be possible? I know
> there were some bugs in the downgrade path for version probing in the
> past, but I'm pretty sure they should have been fixed by 2.4.1.
> 
> This seems like a bug to me, no?
>
> [rest of quoted thread snipped]

Re: Downgrading KafkaStreams

Posted by Sophie Blee-Goldman <so...@confluent.io.INVALID>.
Matthias, shouldn't version probing prevent this sort of thing from
happening? That is, shouldn't a live downgrade always be possible? I know
there were some bugs in the downgrade path for version probing in the
past, but I'm pretty sure they should have been fixed by 2.4.1.

This seems like a bug to me, no?

On Wed, Oct 13, 2021 at 9:52 AM Murilo Tavares <mu...@gmail.com> wrote:

> Thanks, Matthias
> In this case, I have one instance running.
> Maybe it's the case that the session also needed to time out?
> Thanks
> Murilo
>
> [rest of quoted thread snipped]

Re: Downgrading KafkaStreams

Posted by Murilo Tavares <mu...@gmail.com>.
Thanks, Matthias
In this case, I have one instance running.
Maybe it's the case that the session also needed to time out?
Thanks
Murilo



On Wed, 13 Oct 2021 at 12:25, Matthias J. Sax <mj...@apache.org> wrote:

> For this case, it seems you cannot do a rolling downgrade; you will
> need to stop all instances before restarting with 2.4.1.
>
> -Matthias
>
> [rest of quoted thread snipped]

Re: Downgrading KafkaStreams

Posted by "Matthias J. Sax" <mj...@apache.org>.
For this case, it seems you cannot do a rolling downgrade; you will
need to stop all instances before restarting with 2.4.1.

-Matthias
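
Concretely, the full-stop downgrade amounts to something like the sketch
below; the timeout is illustrative and the helper class is hypothetical.
The idea is that once every member running the 2.7/2.8 jars is gone from
the group (which may take until its session times out, as speculated
above), the leader elected when the 2.4.1 instances rejoin is itself on
2.4.1, so it only hands out assignment metadata that 2.4.1 can decode.

import java.time.Duration;
import org.apache.kafka.streams.KafkaStreams;

public final class FullStopDowngrade {

    // Step 1: stop every instance still running the 2.7/2.8 jars and wait for
    // all of them to shut down, so no group member is left that encodes the
    // newer assignment format.
    public static void stopCompletely(KafkaStreams streams) {
        streams.close(Duration.ofSeconds(60)); // illustrative timeout
    }

    // Step 2: redeploy every instance with the 2.4.1 client and start it again,
    // optionally calling cleanUp() first as in the sketch near the top of the thread.
}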

On 10/13/21 7:34 AM, Murilo Tavares wrote:
> Hi
> I have a large KafkaStreams topology, and for a while I have failed to
> upgrade it from version 2.4.1 to 2.7.0, and this time to version 2.8.1
> (it keeps getting stuck in a rebalance loop).
> I was able to revert it from v2.7.0 back to 2.4.1 in the past, but now I
> can't roll back my code, as I get the following error.
> I have cleaned up the state via "streams.cleanUp()", but still can't get
> rid of the error.
> Any suggestion on how to downgrade it?
>
> [stack trace snipped]