Posted to user@ignite.apache.org by Naveen Kumar <na...@gmail.com> on 2021/09/08 06:42:20 UTC

Partition states validation has failed for group: CUSTOMER_KV

Hi

We are using Ignite 2.8.1

We are trying to build a new cluster by restoring the datastore from
another working cluster.
Steps followed:

1. Stopped the updates on the source cluster
2. Took a copy of the datastore on each node and transferred it to the
destination node
3. Started nodes on the destination cluster

After the cluster is activated, we could see a count mismatch for 2 caches
(around 15K records), and we found some warnings for these 2 caches.
Attached is the exact warning:

[GridDhtPartitionsExchangeFuture] Partition states validation has failed
for group: CL_CUSTOMER_KV, msg: Partitions cache sizes are inconsistent for
part 310: [lvign002b.xxxx.com=874, lvign001b.xxxx.com=875] etc..
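
For reference, one way to quantify such a mismatch is to compare primary-only
sizes of the affected cache on the source and the destination cluster (on the
destination, control.sh --cache idle_verify should also report partitions that
have diverged between primaries and backups). A minimal sketch against the
public Ignite cache API; the config file name is a placeholder:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CachePeekMode;

public class CacheSizeCheck {
    public static void main(String[] args) {
        // "client-config.xml" is a placeholder for a client-mode configuration
        // pointing at whichever cluster is being checked.
        try (Ignite ignite = Ignition.start("client-config.xml")) {
            IgniteCache<Object, Object> cache = ignite.cache("CL_CUSTOMER_KV");
            // PRIMARY counts each key exactly once and ignores backup copies,
            // so the number is directly comparable between the two clusters.
            System.out.println("CL_CUSTOMER_KV primary size: "
                + cache.size(CachePeekMode.PRIMARY));
        }
    }
}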

What could be the reason for this count mismatch?

Thanks





-- 
Thanks & Regards,
Naveen Bandaru

Re: Partition states validation has failed for group: CUSTOMER_KV

Posted by Pavel Kovalenko <jo...@gmail.com>.
Hi Naveen,

I think just stopping updates is not enough to make a consistent snapshot
of the partition stores.
You must ensure that all updates are also checkpointed to disk. Otherwise,
to restore a valid snapshot you must copy the WAL as well as the partition
stores.
You can try to deactivate the source cluster, make a copy of the partition
stores, and then activate it again.
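
For reference, the same sequence can be driven from a client node through the
public cluster API. A minimal sketch, where the config file name is a
placeholder and the file copy itself happens outside Ignite while the cluster
is deactivated:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class CopySourceStores {
    public static void main(String[] args) {
        // "source-client.xml" is a placeholder for a client-mode configuration
        // of the source cluster.
        try (Ignite ignite = Ignition.start("source-client.xml")) {
            // Deactivating stops all updates and lets the nodes finish writing
            // dirty pages to disk, so the partition store files are consistent.
            ignite.cluster().active(false);

            // ... copy the persistence directories (work/db by default) from
            //     every node to the destination cluster here ...

            // Bring the source cluster back online once the copy is complete.
            ignite.cluster().active(true);
        }
    }
}

The same can be done from the command line with control.sh --deactivate and
control.sh --activate.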


Thu, Sep 9, 2021 at 15:42, Naveen Kumar <na...@gmail.com>:

> Any pointers or clues on this issue?
>
> Is it an issue with the source cluster or something to do with the target
> cluster?
> Does a clean restart of the source cluster help here in any way, e.g.
> inconsistent partitions becoming consistent?
>
> Thanks
>

Re: Partition states validation has failed for group: CUSTOMER_KV

Posted by Naveen Kumar <na...@gmail.com>.
Any pointers or clues on this issue?

Is it an issue with the source cluster or something to do with the target
cluster?
Does a clean restart of the source cluster help here in any way, e.g.
inconsistent partitions becoming consistent?

Thanks



-- 
Thanks & Regards,
Naveen Bandaru