Posted to notifications@geode.apache.org by GitBox <gi...@apache.org> on 2021/04/28 18:11:55 UTC

[GitHub] [geode] kirklund opened a new pull request #6386: DRAFT: GEODE-9195: Remove PR clear local locking

kirklund opened a new pull request #6386:
URL: https://github.com/apache/geode/pull/6386


   Unit test changes in BucketRegion and DistributedRegion.
   
   Unit test most of PartitionedRegionClearMessage.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [geode] onichols-pivotal edited a comment on pull request #6386: [Merge to feature/GEODE-7665] GEODE-9195: Remove PR clear local locking

Posted by GitBox <gi...@apache.org>.
onichols-pivotal edited a comment on pull request #6386:
URL: https://github.com/apache/geode/pull/6386#issuecomment-830255968


   Actually, your feature branch does not *require* codeowners.  You can merge PRs on any branch other than develop with 0 reviews if you want.
   
   I agree the GUI makes this confusing. Since the last rebase of this branch with develop picked up the CODEOWNERS file, GitHub does show who would be the relevant codeowners for this PR if it were actually for develop (CODEOWNERS is a separate concept from the minimum required number of reviewers).





[GitHub] [geode] DonalEvans commented on pull request #6386: [Merge to feature/GEODE-7665] GEODE-9195: Remove PR clear local locking

Posted by GitBox <gi...@apache.org>.
DonalEvans commented on pull request #6386:
URL: https://github.com/apache/geode/pull/6386#issuecomment-830254616


   > Closing PR because I don't want to waste the time of CODEOWNERs to handle a PR that is going to feature/GEODE-7665 instead of develop. I also have lots of conflicts to handle again.
   
   For what it's worth, I don't consider it a waste of time to review commits that are going to a feature branch that will at some point be merged to develop. By reviewing commits as they go into the feature branch, far less work will be needed when it comes time to merge the branch in, since codeowners can be confident that all the changes have already been reviewed and can give a thumbs-up without re-reviewing the entire colossal commit (which currently would include 114 files changed, 10,964 insertions, and 1,801 deletions).





[GitHub] [geode] kirklund commented on a change in pull request #6386: [Merge to feature/GEODE-7665] GEODE-9195: Remove PR clear local locking

Posted by GitBox <gi...@apache.org>.
kirklund commented on a change in pull request #6386:
URL: https://github.com/apache/geode/pull/6386#discussion_r624025758



##########
File path: geode-core/src/main/java/org/apache/geode/internal/cache/BucketRegion.java
##########
@@ -577,22 +578,38 @@ public void cmnClearRegion(RegionEventImpl regionEvent, boolean cacheWrite, bool
     // get rvvLock
     Set<InternalDistributedMember> participants =
         getCacheDistributionAdvisor().adviseInvalidateRegion();
-    boolean isLockedAlready = this.partitionedRegion.getPartitionedRegionClear()
-        .isLockedForListenerAndClientNotification();
 
     try {
-      obtainWriteLocksForClear(regionEvent, participants, isLockedAlready);
+      obtainWriteLocksForClear(regionEvent, participants);
       // no need to dominate my own rvv.
       // Clear is on going here, there won't be GII for this member
       clearRegionLocally(regionEvent, cacheWrite, null);
       distributeClearOperation(regionEvent, null, participants);
 
       // TODO: call reindexUserDataRegion if there're lucene indexes
     } finally {
-      releaseWriteLocksForClear(regionEvent, participants, isLockedAlready);
+      releaseWriteLocksForClear(regionEvent, participants);
     }
   }
 
+  @Override
+  protected void obtainWriteLocksForClear(RegionEventImpl regionEvent,
+      Set<InternalDistributedMember> participants) {
+    lockAndFlushClearToOthers(regionEvent, participants);
+  }
+
+  @Override
+  protected void releaseWriteLocksForClear(RegionEventImpl regionEvent,
+      Set<InternalDistributedMember> participants) {
+    distributedClearOperationReleaseLocks(regionEvent, participants);
+  }
+
+  @VisibleForTesting
+  void distributedClearOperationReleaseLocks(RegionEventImpl regionEvent,

Review comment:
       I'm trying to clean up code on feature/GEODE-7665 and write unit tests where we have none yet. The team wants to review code in PR form before merging to the feature branch, which unfortunately pulls in CODEOWNERS and requires codeowner approvals. This feature branch is still WIP and is not going to develop.







[GitHub] [geode] DonalEvans commented on a change in pull request #6386: [Merge to feature/GEODE-7665] GEODE-9195: Remove PR clear local locking

Posted by GitBox <gi...@apache.org>.
DonalEvans commented on a change in pull request #6386:
URL: https://github.com/apache/geode/pull/6386#discussion_r623372178



##########
File path: geode-core/src/test/java/org/apache/geode/internal/cache/PartitionedRegionClearMessageTest.java
##########
@@ -0,0 +1,285 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.cache;
+
+import static java.util.Collections.emptySet;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.catchThrowable;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.anyBoolean;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+import java.util.Collection;
+
+import org.junit.Before;
+import org.junit.Test;
+
+import org.apache.geode.cache.Operation;
+import org.apache.geode.cache.RegionEvent;
+import org.apache.geode.distributed.DistributedMember;
+import org.apache.geode.distributed.internal.ClusterDistributionManager;
+import org.apache.geode.distributed.internal.DistributionAdvisor;
+import org.apache.geode.distributed.internal.DistributionManager;
+import org.apache.geode.distributed.internal.ReplyProcessor21;
+import org.apache.geode.distributed.internal.membership.InternalDistributedMember;
+import org.apache.geode.internal.cache.PartitionedRegionClearMessage.OperationType;
+
+public class PartitionedRegionClearMessageTest {
+
+  private Collection<InternalDistributedMember> recipients;
+  private DistributionManager distributionManager;
+  private PartitionedRegion partitionedRegion;
+  private ReplyProcessor21 replyProcessor21;
+  private Object callbackArgument;
+  private EventID eventId;
+  private RegionEventFactory regionEventFactory;
+
+  @Before
+  public void setUp() {
+    recipients = emptySet();
+    distributionManager = mock(DistributionManager.class);
+    partitionedRegion = mock(PartitionedRegion.class);
+    replyProcessor21 = mock(ReplyProcessor21.class);
+    callbackArgument = new Object();
+    eventId = mock(EventID.class);
+    regionEventFactory = mock(RegionEventFactory.class);
+  }
+
+  @Test
+  public void construction_throwsNullPointerExceptionIfRecipientsIsNull() {
+    Throwable thrown = catchThrowable(() -> {
+      new PartitionedRegionClearMessage(null, distributionManager, 1,
+          replyProcessor21, OperationType.OP_PR_CLEAR, callbackArgument, eventId, false,
+          regionEventFactory);
+    });
+
+    assertThat(thrown).isInstanceOf(NullPointerException.class);
+  }
+
+  @Test
+  public void construction_findsAllDependencies() {
+    boolean isTransactionDistributed = true;
+    int regionId = 10;
+    InternalCache cache = mock(InternalCache.class);
+    RegionEventImpl regionEvent = mock(RegionEventImpl.class);
+    TXManagerImpl txManager = mock(TXManagerImpl.class);
+    when(cache.getTxManager()).thenReturn(txManager);
+    when(partitionedRegion.getCache()).thenReturn(cache);
+    when(partitionedRegion.getDistributionManager()).thenReturn(distributionManager);
+    when(partitionedRegion.getPRId()).thenReturn(regionId);
+    when(regionEvent.getEventId()).thenReturn(eventId);
+    when(regionEvent.getRawCallbackArgument()).thenReturn(callbackArgument);
+    when(txManager.isDistributed()).thenReturn(isTransactionDistributed);
+
+    PartitionedRegionClearMessage message = new PartitionedRegionClearMessage(recipients,
+        partitionedRegion,
+        replyProcessor21,
+        OperationType.OP_PR_CLEAR,
+        regionEvent);
+
+    assertThat(message.getDistributionManagerForTesting()).isSameAs(distributionManager);
+    assertThat(message.getCallbackArgumentForTesting()).isSameAs(callbackArgument);
+    assertThat(message.getRegionId()).isEqualTo(regionId);
+    assertThat(message.getEventID()).isEqualTo(eventId);
+    assertThat(message.isTransactionDistributed()).isEqualTo(isTransactionDistributed);
+
+    RegionEventFactory regionEventFactory = message.getRegionEventFactoryForTesting();
+    RegionEvent<?, ?> created =
+        regionEventFactory.create(partitionedRegion, Operation.DESTROY, callbackArgument, false,
+            mock(DistributedMember.class), mock(EventID.class));
+    assertThat(created).isInstanceOf(RegionEventImpl.class);
+  }
+
+  @Test
+  public void construction_setsTransactionDistributed() {

Review comment:
       This test case seems to be covered by the test above it. Is it necessary to have a separate test for it here too?

##########
File path: geode-core/src/test/java/org/apache/geode/internal/cache/PartitionedRegionClearMessageTest.java
##########
@@ -0,0 +1,285 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.cache;
+
+import static java.util.Collections.emptySet;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.catchThrowable;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.anyBoolean;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+import java.util.Collection;
+
+import org.junit.Before;
+import org.junit.Test;
+
+import org.apache.geode.cache.Operation;
+import org.apache.geode.cache.RegionEvent;
+import org.apache.geode.distributed.DistributedMember;
+import org.apache.geode.distributed.internal.ClusterDistributionManager;
+import org.apache.geode.distributed.internal.DistributionAdvisor;
+import org.apache.geode.distributed.internal.DistributionManager;
+import org.apache.geode.distributed.internal.ReplyProcessor21;
+import org.apache.geode.distributed.internal.membership.InternalDistributedMember;
+import org.apache.geode.internal.cache.PartitionedRegionClearMessage.OperationType;
+
+public class PartitionedRegionClearMessageTest {
+
+  private Collection<InternalDistributedMember> recipients;
+  private DistributionManager distributionManager;
+  private PartitionedRegion partitionedRegion;
+  private ReplyProcessor21 replyProcessor21;
+  private Object callbackArgument;
+  private EventID eventId;
+  private RegionEventFactory regionEventFactory;
+
+  @Before
+  public void setUp() {
+    recipients = emptySet();
+    distributionManager = mock(DistributionManager.class);
+    partitionedRegion = mock(PartitionedRegion.class);
+    replyProcessor21 = mock(ReplyProcessor21.class);
+    callbackArgument = new Object();
+    eventId = mock(EventID.class);
+    regionEventFactory = mock(RegionEventFactory.class);
+  }
+
+  @Test
+  public void construction_throwsNullPointerExceptionIfRecipientsIsNull() {
+    Throwable thrown = catchThrowable(() -> {
+      new PartitionedRegionClearMessage(null, distributionManager, 1,
+          replyProcessor21, OperationType.OP_PR_CLEAR, callbackArgument, eventId, false,
+          regionEventFactory);
+    });
+
+    assertThat(thrown).isInstanceOf(NullPointerException.class);
+  }
+
+  @Test
+  public void construction_findsAllDependencies() {
+    boolean isTransactionDistributed = true;
+    int regionId = 10;
+    InternalCache cache = mock(InternalCache.class);
+    RegionEventImpl regionEvent = mock(RegionEventImpl.class);
+    TXManagerImpl txManager = mock(TXManagerImpl.class);
+    when(cache.getTxManager()).thenReturn(txManager);
+    when(partitionedRegion.getCache()).thenReturn(cache);
+    when(partitionedRegion.getDistributionManager()).thenReturn(distributionManager);
+    when(partitionedRegion.getPRId()).thenReturn(regionId);
+    when(regionEvent.getEventId()).thenReturn(eventId);
+    when(regionEvent.getRawCallbackArgument()).thenReturn(callbackArgument);
+    when(txManager.isDistributed()).thenReturn(isTransactionDistributed);
+
+    PartitionedRegionClearMessage message = new PartitionedRegionClearMessage(recipients,
+        partitionedRegion,
+        replyProcessor21,
+        OperationType.OP_PR_CLEAR,
+        regionEvent);
+
+    assertThat(message.getDistributionManagerForTesting()).isSameAs(distributionManager);
+    assertThat(message.getCallbackArgumentForTesting()).isSameAs(callbackArgument);
+    assertThat(message.getRegionId()).isEqualTo(regionId);
+    assertThat(message.getEventID()).isEqualTo(eventId);
+    assertThat(message.isTransactionDistributed()).isEqualTo(isTransactionDistributed);
+
+    RegionEventFactory regionEventFactory = message.getRegionEventFactoryForTesting();
+    RegionEvent<?, ?> created =
+        regionEventFactory.create(partitionedRegion, Operation.DESTROY, callbackArgument, false,
+            mock(DistributedMember.class), mock(EventID.class));
+    assertThat(created).isInstanceOf(RegionEventImpl.class);
+  }
+
+  @Test
+  public void construction_setsTransactionDistributed() {
+    boolean isTransactionDistributed = true;
+    PartitionedRegionClearMessage message =
+        new PartitionedRegionClearMessage(recipients, distributionManager, 1,
+            replyProcessor21, OperationType.OP_PR_CLEAR, callbackArgument, eventId,
+            isTransactionDistributed, regionEventFactory);
+
+    boolean value = message.isTransactionDistributed();
+
+    assertThat(value).isEqualTo(isTransactionDistributed);
+  }
+
+  @Test
+  public void getEventID_returnsTheEventId() {

Review comment:
       This test case seems to be covered in `construction_findsAllDependencies()`. Is it necessary to test it here too?

##########
File path: geode-core/src/main/java/org/apache/geode/internal/cache/DistributedRegion.java
##########
@@ -2082,30 +2082,31 @@ private void distributedUnlockForClear() {
     }
   }
 
-
   /**
    * obtain locks preventing generation of new versions in other members
    */
   protected void obtainWriteLocksForClear(RegionEventImpl regionEvent,
-      Set<InternalDistributedMember> participants, boolean localLockedAlready) {
-    if (!localLockedAlready) {
-      lockLocallyForClear(getDistributionManager(), getMyId(), regionEvent);
-    }
-    lockAndFlushClearToOthers(regionEvent, participants);
+      Set<InternalDistributedMember> recipients) {
+    lockLocallyForClear(getDistributionManager(), getMyId(), regionEvent);

Review comment:
       It might just be me misunderstanding the ticket description, but it made it seem like this call would be removed, rather than just the conditional around it. Why is the local locking still present here?

##########
File path: geode-core/src/test/java/org/apache/geode/internal/cache/BucketRegionJUnitTest.java
##########
@@ -211,4 +204,48 @@ public void updateSizeToZeroOnClearBucketRegion() {
     long sizeAfterClear = region.getTotalBytes();
     assertEquals(0, sizeAfterClear);
   }
+
+  @Test
+  public void obtainWriteLocksForClearInBRShouldLockAndFlushToOthers() {

Review comment:
       It's unclear to me what this test case is testing that's different from the subsequent test case. Both appear to be verifying that `lockAndFlushClearToOthers()` is called when `obtainWriteLocksForClear()` is called on the BucketRegion.
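
For illustration, the delegation both overlapping test cases appear to verify can be pinned down with a single hand-rolled spy. This is a minimal sketch with hypothetical stand-in classes, not the real Geode types or their Mockito-based tests:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for BucketRegion: counts how often the flush hook
// runs, which is what both overlapping test cases appear to check.
class SpyBucket {
    final AtomicInteger flushCalls = new AtomicInteger();

    void lockAndFlushClearToOthers() {
        flushCalls.incrementAndGet();
    }

    void obtainWriteLocksForClear() {
        lockAndFlushClearToOthers();  // the delegation under test
    }
}

public class DelegationSpyDemo {
    public static void main(String[] args) {
        SpyBucket bucket = new SpyBucket();
        bucket.obtainWriteLocksForClear();
        if (bucket.flushCalls.get() != 1) {
            throw new AssertionError("expected exactly one flush call");
        }
        System.out.println("obtainWriteLocksForClear delegated exactly once");
    }
}
```

If both tests reduce to this one interaction, merging them into a single focused test would make the remaining coverage gaps easier to see.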

##########
File path: geode-core/src/main/java/org/apache/geode/internal/cache/BucketRegion.java
##########
@@ -577,22 +578,38 @@ public void cmnClearRegion(RegionEventImpl regionEvent, boolean cacheWrite, bool
     // get rvvLock
     Set<InternalDistributedMember> participants =
         getCacheDistributionAdvisor().adviseInvalidateRegion();
-    boolean isLockedAlready = this.partitionedRegion.getPartitionedRegionClear()
-        .isLockedForListenerAndClientNotification();
 
     try {
-      obtainWriteLocksForClear(regionEvent, participants, isLockedAlready);
+      obtainWriteLocksForClear(regionEvent, participants);
       // no need to dominate my own rvv.
       // Clear is on going here, there won't be GII for this member
       clearRegionLocally(regionEvent, cacheWrite, null);
       distributeClearOperation(regionEvent, null, participants);
 
       // TODO: call reindexUserDataRegion if there're lucene indexes
     } finally {
-      releaseWriteLocksForClear(regionEvent, participants, isLockedAlready);
+      releaseWriteLocksForClear(regionEvent, participants);
     }
   }
 
+  @Override
+  protected void obtainWriteLocksForClear(RegionEventImpl regionEvent,
+      Set<InternalDistributedMember> participants) {
+    lockAndFlushClearToOthers(regionEvent, participants);
+  }
+
+  @Override
+  protected void releaseWriteLocksForClear(RegionEventImpl regionEvent,
+      Set<InternalDistributedMember> participants) {
+    distributedClearOperationReleaseLocks(regionEvent, participants);
+  }
+
+  @VisibleForTesting
+  void distributedClearOperationReleaseLocks(RegionEventImpl regionEvent,

Review comment:
       I'm a little unclear on why this method exists in both `BucketRegion` and `DistributedRegion` with identical implementation, but `lockAndFlushClearToOthers()` exists only in `DistributedRegion`. Also, in terms of naming consistency, could this method either be renamed "releaseLocks" or could `lockAndFlushClearToOthers()` be renamed "distributedClearOperationLockAndFlushClearToOthers"?
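
The duplication the comment points at could, in principle, be removed by keeping the release step in the common superclass and letting the bucket subclass inherit it, overriding only the obtain step. A minimal, self-contained sketch of that shape follows; the class names, step names, and `clear()` driver are hypothetical stand-ins, not the real Geode classes:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for DistributedRegion: records each locking step
// so the clear sequence can be inspected.
class BaseRegion {
    final List<String> steps = new ArrayList<>();

    protected void obtainWriteLocksForClear() {
        steps.add("local-lock");        // base takes the local lock first
        steps.add("flush-to-others");
    }

    protected void releaseWriteLocksForClear() {
        steps.add("release-locks");     // single shared implementation
    }

    final List<String> clear() {
        obtainWriteLocksForClear();
        try {
            steps.add("clear-locally");
        } finally {
            releaseWriteLocksForClear();
        }
        return steps;
    }
}

// Hypothetical stand-in for BucketRegion: overrides only the obtain step.
class Bucket extends BaseRegion {
    @Override
    protected void obtainWriteLocksForClear() {
        steps.add("flush-to-others");   // bucket skips the local lock
    }
    // releaseWriteLocksForClear() is inherited: no duplicated body
}

public class TemplateMethodSketch {
    public static void main(String[] args) {
        System.out.println(new BaseRegion().clear());
        // [local-lock, flush-to-others, clear-locally, release-locks]
        System.out.println(new Bucket().clear());
        // [flush-to-others, clear-locally, release-locks]
    }
}
```

With this shape only the obtain step differs between the two classes, which also sidesteps the naming question: there is one release method, defined once.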







[GitHub] [geode] kirklund closed pull request #6386: [Merge to feature/GEODE-7665] GEODE-9195: Remove PR clear local locking

Posted by GitBox <gi...@apache.org>.
kirklund closed pull request #6386:
URL: https://github.com/apache/geode/pull/6386


   





[GitHub] [geode] onichols-pivotal commented on pull request #6386: [Merge to feature/GEODE-7665] GEODE-9195: Remove PR clear local locking

Posted by GitBox <gi...@apache.org>.
onichols-pivotal commented on pull request #6386:
URL: https://github.com/apache/geode/pull/6386#issuecomment-830255968


   Actually, your feature branch does not *require* codeowners.  You can merge PRs on any branch other than develop with 0 reviews if you want.
   
   However, since your last rebase with develop picked up the CODEOWNERS file, GitHub does show who would be the relevant codeowners for this PR if it were actually for develop.





[GitHub] [geode] kirklund commented on pull request #6386: [Merge to feature/GEODE-7665] GEODE-9195: Remove PR clear local locking

Posted by GitBox <gi...@apache.org>.
kirklund commented on pull request #6386:
URL: https://github.com/apache/geode/pull/6386#issuecomment-830234195


   Closing PR because I don't want to waste the time of CODEOWNERs to handle a PR that is going to feature/GEODE-7665 instead of develop. I also have lots of conflicts to handle again.

