Posted to issues@geode.apache.org by "Mark Hanson (Jira)" <ji...@apache.org> on 2021/11/17 18:23:00 UTC

[jira] [Assigned] (GEODE-9815) Recovering persistent members can result in extra copies of a bucket or two copies in the same redundancy zone

     [ https://issues.apache.org/jira/browse/GEODE-9815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Hanson reassigned GEODE-9815:
----------------------------------

    Assignee: Mark Hanson

> Recovering persistent members can result in extra copies of a bucket or two copies in the same redundancy zone
> ---------------------------------------------------------------------------------------------------------------
>
>                 Key: GEODE-9815
>                 URL: https://issues.apache.org/jira/browse/GEODE-9815
>             Project: Geode
>          Issue Type: Bug
>          Components: regions
>    Affects Versions: 1.15.0
>            Reporter: Dan Smith
>            Assignee: Mark Hanson
>            Priority: Major
>              Labels: GeodeOperationAPI, needsTriage
>
> The fix in GEODE-9554 is incomplete for some cases, and it also introduces a new issue when removing buckets that are over redundancy.
> GEODE-9554 and these new issues are all related to using redundancy zones and having persistent members.
> With persistence, when we start up a member that has persisted buckets, we always recover those buckets, regardless of whether redundancy is already met or which zone the existing copies are in. This is necessary to ensure that we can recover all colocated buckets that might be persisted on the member.
> Because recovering these persistent buckets may cause us to go over redundancy, after we recover from disk, we run a "restore redundancy" task that actually removes copies of buckets that are over redundancy.
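> For context, here is a minimal sketch (not from this ticket; the zone and region names are illustrative) of the kind of setup involved: each member declares its redundancy zone via the redundancy-zone property and hosts a persistent partitioned region, so its buckets are written to disk and recovered on restart.
> {code:java}
> import java.util.Properties;
>
> import org.apache.geode.cache.Cache;
> import org.apache.geode.cache.CacheFactory;
> import org.apache.geode.cache.Region;
> import org.apache.geode.cache.RegionShortcut;
>
> public class ZoneMemberExample {
>   public static void main(String[] args) {
>     Properties props = new Properties();
>     // This member lives in redundancy zone "A"; copies of a bucket should
>     // normally be spread across different zones.
>     props.setProperty("redundancy-zone", "A");
>
>     Cache cache = new CacheFactory(props).create();
>
>     // Persistent partitioned region with one redundant copy: its buckets are
>     // persisted to disk and recovered on restart, regardless of redundancy.
>     Region<Integer, String> region = cache
>         .<Integer, String>createRegionFactory(RegionShortcut.PARTITION_REDUNDANT_PERSISTENT)
>         .create("example-region");
>
>     // With default partitioning, an Integer key of 0 typically lands in bucket 0.
>     region.put(0, "zero");
>   }
> }
> {code}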
> GEODE-9554 addressed one case where we end up removing the last copy of a bucket from one redundancy zone while leaving two copies in another redundancy zone. It did so by disallowing the removal of a bucket if it is the last copy in a redundancy zone.
> There are a couple of issues with this approach.
> *Problem 1:* In some cases we may end up with two copies of a bucket in one zone
> With a slight tweak to the scenario fixed by GEODE-9554, we can end up in a situation where two copies of a bucket live in the same zone and nothing ever resolves it.
> Steps:
> 1. Start two redundancy zones, A and B, with two members each. Bucket 0 is on members A1 and B1.
> 2. Shut down member A1.
> 3. Rebalance - this will create bucket 0 on A2 (a rebalance can be triggered as in the sketch after these steps).
> 4. Shut down B1. Revoke its disk store and delete the data.
> 5. Start up A1 - it will recover bucket 0.
> 6. At this point, bucket 0 is on A1 and A2, and nothing will resolve that situation.
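> As an aside, the rebalance in step 3 above can be triggered programmatically; a minimal sketch (cache setup omitted):
> {code:java}
> import org.apache.geode.cache.Cache;
> import org.apache.geode.cache.control.RebalanceOperation;
> import org.apache.geode.cache.control.RebalanceResults;
>
> public class TriggerRebalance {
>   // Starts a rebalance from this member and waits for it to complete.
>   static RebalanceResults rebalance(Cache cache) throws InterruptedException {
>     RebalanceOperation op = cache.getResourceManager()
>         .createRebalanceFactory()
>         .start();
>     return op.getResults(); // blocks until the rebalance finishes
>   }
> }
> {code}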
> *Problem 2:* We may never delete extra copies of a bucket
> The fix for GEODE-9554 introduces a new problem if we have more than two redundancy zones.
> Steps:
> 1. Start three redundancy zones, A, B, and C, with one member each. Bucket 0 is on A1 and B1.
> 2. Shut down A1.
> 3. Rebalance - this will create bucket 0 on C1.
> 4. Start up A1 - this will recreate bucket 0.
> 5. Now we have bucket 0 on A1, B1, and C1. Nothing will remove the extra copy (one way to observe the placement is sketched below).
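> For completeness, one way to see which members end up hosting the copies of a given key's bucket (this helper is illustrative; with default partitioning an Integer key of 0 typically maps to bucket 0):
> {code:java}
> import java.util.Set;
>
> import org.apache.geode.cache.Region;
> import org.apache.geode.cache.partition.PartitionRegionHelper;
> import org.apache.geode.distributed.DistributedMember;
>
> public class BucketPlacementCheck {
>   // Prints the members hosting the primary and redundant copies of the
>   // bucket that the given key maps to.
>   static void printHosts(Region<Integer, String> region, Integer key) {
>     DistributedMember primary = PartitionRegionHelper.getPrimaryMemberForKey(region, key);
>     Set<DistributedMember> redundant = PartitionRegionHelper.getRedundantMembersForKey(region, key);
>     System.out.println("primary:   " + primary);
>     System.out.println("redundant: " + redundant);
>   }
> }
> {code}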
> I think the overall fix is probably to do something other than preventing removal of the last copy of a bucket from a redundancy zone. Instead, I think we should do something like this (roughly sketched after the list):
> 1. Change PartitionRegionLoadModel.getOverRedundancyBuckets to return *any* buckets that have two copies in the same zone, as well as any buckets that are actually over redundancy.
> 2. Change PartitionRegionLoadModel.findBestRemove to always remove extra copies of a bucket in the same zone first.
> 3. Back out the changes for GEODE-9554 and let the last copy be deleted from a zone.
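> A rough, self-contained sketch of what that selection logic could look like. Every name below is invented for illustration; the real logic lives in PartitionRegionLoadModel and its types and signatures differ:
> {code:java}
> import java.util.HashSet;
> import java.util.List;
> import java.util.Map;
> import java.util.Set;
>
> // Hypothetical sketch only - not the actual Geode internals.
> class OverRedundancySketch {
>
>   record Copy(String member, String zone) {}
>
>   // bucketId -> copies currently hosting that bucket
>   private final Map<Integer, List<Copy>> buckets;
>   private final int redundantCopies;
>
>   OverRedundancySketch(Map<Integer, List<Copy>> buckets, int redundantCopies) {
>     this.buckets = buckets;
>     this.redundantCopies = redundantCopies;
>   }
>
>   // Proposed getOverRedundancyBuckets: flag a bucket if it has more copies
>   // than configured *or* if any single zone holds two of its copies.
>   Set<Integer> getOverRedundancyBuckets() {
>     Set<Integer> result = new HashSet<>();
>     buckets.forEach((bucketId, copies) -> {
>       boolean overRedundancy = copies.size() > redundantCopies + 1;
>       boolean duplicateInZone =
>           copies.stream().map(Copy::zone).distinct().count() < copies.size();
>       if (overRedundancy || duplicateInZone) {
>         result.add(bucketId);
>       }
>     });
>     return result;
>   }
>
>   // Proposed findBestRemove: prefer removing a copy whose zone already holds
>   // another copy of the same bucket; otherwise fall back to any copy
>   // (standing in for the existing "best remove" heuristics, with the
>   // GEODE-9554 last-copy-in-zone veto backed out).
>   Copy findBestRemove(int bucketId) {
>     List<Copy> copies = buckets.get(bucketId);
>     return copies.stream()
>         .filter(c -> copies.stream()
>             .anyMatch(other -> other != c && other.zone().equals(c.zone())))
>         .findFirst()
>         .orElse(copies.get(copies.size() - 1));
>   }
> }
> {code}
> If I am reading the scenarios right, under that sketch the Problem 1 bucket (two copies in zone A, not over configured redundancy) would now be reported and one of the A copies removed, and the Problem 2 bucket could shed its extra copy because the last-copy-in-zone veto would no longer apply.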



--
This message was sent by Atlassian Jira
(v8.20.1#820001)