Posted to commits@geode.apache.org by zh...@apache.org on 2017/05/03 20:53:15 UTC

[01/10] geode git commit: GEODE-2843 User Guide - example should specify <client-cache> [Forced Update!]

Repository: geode
Updated Branches:
  refs/heads/feature/GEM-1353 08ba1244d -> d4ece31fa (forced update)


GEODE-2843 User Guide - example should specify <client-cache>
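
In a client-side declarative configuration the root element must be `<client-cache>` rather than `<cache>`, because the file is loaded by a `ClientCache`. A minimal sketch of loading such a file through `ClientCacheFactory` (the file name `client-cache.xml` is only an assumption for this example, not something specified by the commit):

```java
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;

public class ClientCacheXmlExample {
  public static void main(String[] args) {
    // Load a declarative configuration whose root element is <client-cache>.
    // "client-cache.xml" is a hypothetical file name for this sketch.
    ClientCache cache = new ClientCacheFactory()
        .set("cache-xml-file", "client-cache.xml")
        .create();
    try {
      // ... use the client cache ...
    } finally {
      cache.close();
    }
  }
}
```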


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/dd246bd5
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/dd246bd5
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/dd246bd5

Branch: refs/heads/feature/GEM-1353
Commit: dd246bd58a0897eb65c2f55fdd60b2a6ed66e9b7
Parents: c2e7d1f
Author: Dave Barnes <db...@pivotal.io>
Authored: Mon May 1 16:34:36 2017 -0700
Committer: Dave Barnes <db...@pivotal.io>
Committed: Mon May 1 16:34:36 2017 -0700

----------------------------------------------------------------------
 .../reference/topics/client-cache.html.md.erb   | 34 +++++++++++---------
 1 file changed, 18 insertions(+), 16 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/dd246bd5/geode-docs/reference/topics/client-cache.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/reference/topics/client-cache.html.md.erb b/geode-docs/reference/topics/client-cache.html.md.erb
index 6b34132..99e5b39 100644
--- a/geode-docs/reference/topics/client-cache.html.md.erb
+++ b/geode-docs/reference/topics/client-cache.html.md.erb
@@ -84,7 +84,7 @@ Specifies a transaction listener.
 **Example:**
 
 ``` pre
-<cache search-timeout="60">
+<client-cache search-timeout="60">
    <cache-transaction-manager>
      <transaction-listener>
        <class-name>com.company.data.MyTransactionListener</class-name>
@@ -101,7 +101,7 @@ Specifies a transaction listener.
        <parameter>
      </transaction-writer>
    </cache-transaction-manager> .. .
-</cache>
+</client-cache>
 ```
 
 ## <a id="cc-transaction-listener" class="no-quick-link"></a>&lt;transaction-listener&gt;
@@ -494,7 +494,7 @@ Specifies the configuration for the Portable Data eXchange (PDX) method of seria
 **Example:**
 
 ``` pre
-<cache>
+<client-cache>
   <pdx persistent="true" disk-store-name="myDiskStore">
     <pdx-serializer>
       <class-name>
@@ -506,7 +506,7 @@ Specifies the configuration for the Portable Data eXchange (PDX) method of seria
   </pdx-serializer>
  </pdx>
   ...
-</cache>
+</client-cache>
 ```
 
 ## <a id="cc-pdx-serializer" class="no-quick-link"></a>&lt;pdx-serializer&gt;
@@ -522,14 +522,14 @@ Specify the Java class and its initialization parameters with the `<class-name>`
 **Example:**
 
 ``` pre
-<cache>
+<client-cache>
   <pdx>
     <pdx-serializer>
      <class-name>com.company.ExamplePdxSerializer</class-name>
     </pdx-serializer>
   </pdx> 
   ...
-</cache>
+</client-cache>
 ```
 
 ## <a id="cc-region-attributes" class="no-quick-link"></a>&lt;region-attributes&gt;
@@ -2519,14 +2519,16 @@ Configures the behavior of the function execution service.
 **Example:**
 
 ``` pre
-<cache>
-    ...
+<client-cache>
+  ...
     </region>
-<function-service>
-  <function>
-    <class-name>com.myCompany.tradeService.cache.func.TradeCalc</class-name>
-  </function>
-</function-service>
+  <function-service>
+    <function>
+      <class-name>com.myCompany.tradeService.cache.func.TradeCalc</class-name>
+    </function>
+  </function-service>
+  ...
+</client-cache>
 ```
 
 ## <a id="cc-function" class="no-quick-link"></a>&lt;function&gt;
@@ -2610,13 +2612,13 @@ A memory monitor that tracks cache size as a percentage of total heap or off-hea
 **Example:**
 
 ``` pre
-<cache>
+<client-cache>
 ...
    <resource-manager 
       critical-heap-percentage="99.9" 
       eviction-heap=-percentage="85"/>
 ...
-</cache>
+</client-cache>
 ```
 
 ## <a id="cc-serialization-registration" class="no-quick-link"></a>&lt;serialization-registration&gt;
@@ -2649,7 +2651,7 @@ Specify the Java class and its initialization parameters with the `<class-name>`
 
 **API:** `DataSerializable`
 
-You can also directly specify `<instantiator>` as a sub-element of `<cache>`. Use the `org.apache.geode.Instantiator` API to register a `DataSerializable` implementation as the serialization framework for the cache. The following table lists the attribute that can be specified for an `<instantiator>`.
+You can also directly specify `<instantiator>` as a sub-element of `<client-cache>`. Use the `org.apache.geode.Instantiator` API to register a `DataSerializable` implementation as the serialization framework for the cache. The following table lists the attribute that can be specified for an `<instantiator>`.
 
 <a id="cc-instantiator__d93e6596"></a>
 


[04/10] geode git commit: GEODE-2825: Lucene query function will wait until index is returned if an index is defined

Posted by zh...@apache.org.
GEODE-2825: Lucene query function will wait until index is returned if an index is defined

  * If an index is in a defined state but not yet created, the query now waits
    until the index is created or no longer defined, instead of throwing an
    exception and possibly causing a stack overflow
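
The retry behavior described above amounts to a poll-until-available loop. A simplified, generic sketch of that shape (the helper name `waitUntilCreated` and its suppliers are hypothetical, not the actual `LuceneQueryFunction` code, which follows in the diff):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;
import java.util.function.Supplier;

public final class IndexWait {
  /** Poll for the index while it is still only "defined"; return null if interrupted. */
  static <T> T waitUntilCreated(Supplier<T> lookupIndex, BooleanSupplier stillDefined) {
    T index = lookupIndex.get();
    while (index == null && stillDefined.getAsBoolean()) {
      try {
        TimeUnit.MILLISECONDS.sleep(10); // short back-off instead of re-invoking the function
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return null;
      }
      index = lookupIndex.get();
    }
    return index;
  }
}
```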


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/ecbf5576
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/ecbf5576
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/ecbf5576

Branch: refs/heads/feature/GEM-1353
Commit: ecbf55769c1aa04b173e39f719565e03820ac8f2
Parents: b81ebcb
Author: Jason Huynh <hu...@gmail.com>
Authored: Tue May 2 16:31:59 2017 -0700
Committer: Jason Huynh <hu...@gmail.com>
Committed: Tue May 2 16:31:59 2017 -0700

----------------------------------------------------------------------
 .../distributed/LuceneQueryFunction.java        | 24 +++++-------
 .../LuceneQueryFunctionJUnitTest.java           | 41 ++++++++++++++++----
 2 files changed, 44 insertions(+), 21 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/ecbf5576/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/LuceneQueryFunction.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/LuceneQueryFunction.java b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/LuceneQueryFunction.java
index 9ad69ac..b60652f 100644
--- a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/LuceneQueryFunction.java
+++ b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/LuceneQueryFunction.java
@@ -133,29 +133,24 @@ public class LuceneQueryFunction implements Function, InternalEntity {
     try {
       index =
           (LuceneIndexImpl) service.getIndex(searchContext.getIndexName(), region.getFullPath());
-      if (index == null && service instanceof LuceneServiceImpl) {
-        if (((LuceneServiceImpl) service).getDefinedIndex(searchContext.getIndexName(),
-            region.getFullPath()) != null) {
-          // The node may be in the process of recovering, where we have the index defined but yet
-          // to be recovered
-          // If we retry fast enough, we could get a stack overflow based on the way function
-          // execution is currently written
-          // Instead we will add an artificial sleep to slow down the retry at this point
-          // Hopefully in the future, the function execution would retry without adding to the stack
-          // and this can be removed
+      if (index == null) {
+        while (service instanceof LuceneServiceImpl && (((LuceneServiceImpl) service)
+            .getDefinedIndex(searchContext.getIndexName(), region.getFullPath()) != null)) {
           try {
-            Thread.sleep(1000);
+            Thread.sleep(10);
           } catch (InterruptedException e) {
-            Thread.currentThread().interrupt();
+            return null;
           }
-          throw new InternalFunctionInvocationTargetException(
-              "Defined Lucene Index has not been created");
+          region.getCache().getCancelCriterion().checkCancelInProgress(null);
         }
+        index =
+            (LuceneIndexImpl) service.getIndex(searchContext.getIndexName(), region.getFullPath());
       }
     } catch (CacheClosedException e) {
       throw new InternalFunctionInvocationTargetException(
           "Cache is closed when attempting to retrieve index:" + region.getFullPath(), e);
     }
+
     return index;
   }
 
@@ -181,3 +176,4 @@ public class LuceneQueryFunction implements Function, InternalEntity {
     return true;
   }
 }
+

http://git-wip-us.apache.org/repos/asf/geode/blob/ecbf5576/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/distributed/LuceneQueryFunctionJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/distributed/LuceneQueryFunctionJUnitTest.java b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/distributed/LuceneQueryFunctionJUnitTest.java
index 6690850..737805b 100644
--- a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/distributed/LuceneQueryFunctionJUnitTest.java
+++ b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/distributed/LuceneQueryFunctionJUnitTest.java
@@ -23,6 +23,7 @@ import java.util.ArrayList;
 import java.util.Collection;
 import java.util.List;
 
+import org.apache.geode.CancelCriterion;
 import org.apache.geode.cache.CacheClosedException;
 import org.apache.geode.cache.Region;
 import org.apache.geode.cache.execute.FunctionException;
@@ -54,6 +55,8 @@ import org.junit.Test;
 import org.junit.experimental.categories.Category;
 import org.mockito.ArgumentCaptor;
 import org.mockito.Mockito;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
 
 @Category(UnitTest.class)
 public class LuceneQueryFunctionJUnitTest {
@@ -224,19 +227,42 @@ public class LuceneQueryFunctionJUnitTest {
     function.execute(mockContext);
   }
 
-  @Test(expected = InternalFunctionInvocationTargetException.class)
-  public void whenServiceReturnsNullIndexButHasDefinedLuceneIndexDuringQueryExecutionInternalFunctionExceptionShouldBeThrown()
+  @Test
+  public void whenServiceReturnsNullIndexButHasDefinedLuceneIndexDuringQueryExecutionShouldBlockUntilAvailable()
       throws Exception {
     LuceneServiceImpl mockServiceImpl = mock(LuceneServiceImpl.class);
     when(mockCache.getService(any())).thenReturn(mockServiceImpl);
-    when(mockServiceImpl.getIndex(eq("indexName"), eq(regionPath))).thenReturn(null);
-    when(mockServiceImpl.getDefinedIndex(eq("indexName"), eq(regionPath)))
-        .thenReturn(mock(LuceneIndexCreationProfile.class));
+    when(mockServiceImpl.getIndex(eq("indexName"), eq(regionPath))).thenAnswer(new Answer() {
+      private boolean calledFirstTime = false;
+
+      @Override
+      public Object answer(final InvocationOnMock invocation) throws Throwable {
+        if (calledFirstTime == false) {
+          calledFirstTime = true;
+          return null;
+        } else {
+          return mockIndex;
+        }
+      }
+    });
+    when(mockServiceImpl.getDefinedIndex(eq("indexName"), eq(regionPath))).thenAnswer(new Answer() {
+      private int count = 10;
+
+      @Override
+      public Object answer(final InvocationOnMock invocation) throws Throwable {
+        if (count-- > 0) {
+          return mock(LuceneIndexCreationProfile.class);
+        }
+        return null;
+      }
+    });
     when(mockContext.getDataSet()).thenReturn(mockRegion);
     when(mockContext.getArguments()).thenReturn(searchArgs);
-
+    when(mockContext.<TopEntriesCollector>getResultSender()).thenReturn(mockResultSender);
+    CancelCriterion mockCancelCriterion = mock(CancelCriterion.class);
+    when(mockCache.getCancelCriterion()).thenReturn(mockCancelCriterion);
+    when(mockCancelCriterion.isCancelInProgress()).thenReturn(false);
     LuceneQueryFunction function = new LuceneQueryFunction();
-    when(mockService.getIndex(eq("indexName"), eq(regionPath))).thenReturn(null);
     function.execute(mockContext);
   }
 
@@ -336,3 +362,4 @@ public class LuceneQueryFunctionJUnitTest {
     query = queryProvider.getQuery(mockIndex);
   }
 }
+


[05/10] geode git commit: GEODE-2830 Required permission for executing a function should be DATA:WRITE This closes #485

Posted by zh...@apache.org.
GEODE-2830 Required permission for executing a function should be DATA:WRITE
This closes #485
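
For context on the permission change, a custom `SecurityManager` that allows `execute function` now only needs to grant `DATA:WRITE` to the caller. A minimal sketch, assuming Geode's `org.apache.geode.security` API (the accept-everyone authentication here is purely illustrative):

```java
import java.util.Properties;
import org.apache.geode.security.ResourcePermission;
import org.apache.geode.security.ResourcePermission.Operation;
import org.apache.geode.security.ResourcePermission.Resource;
import org.apache.geode.security.SecurityManager;

public class FunctionUserSecurityManager implements SecurityManager {
  @Override
  public Object authenticate(Properties credentials) {
    // Illustrative only: accept everyone and use the supplied name as the principal.
    return credentials.getProperty("security-username");
  }

  @Override
  public boolean authorize(Object principal, ResourcePermission permission) {
    // After GEODE-2830, 'execute function' requires DATA:WRITE rather than DATA:MANAGE.
    return permission.getResource() == Resource.DATA
        && permission.getOperation() == Operation.WRITE;
  }
}
```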


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/0cb9e7a0
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/0cb9e7a0
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/0cb9e7a0

Branch: refs/heads/feature/GEM-1353
Commit: 0cb9e7a0227eabd90234e18e91a906e0e519e624
Parents: ecbf557
Author: Dave Barnes <db...@pivotal.io>
Authored: Mon May 1 16:49:21 2017 -0700
Committer: Dave Barnes <db...@pivotal.io>
Committed: Tue May 2 17:05:44 2017 -0700

----------------------------------------------------------------------
 .../managing/security/implementing_authorization.html.md.erb       | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/0cb9e7a0/geode-docs/managing/security/implementing_authorization.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/security/implementing_authorization.html.md.erb b/geode-docs/managing/security/implementing_authorization.html.md.erb
index a01feda..f897e4c 100644
--- a/geode-docs/managing/security/implementing_authorization.html.md.erb
+++ b/geode-docs/managing/security/implementing_authorization.html.md.erb
@@ -133,7 +133,7 @@ This table classifies the permissions assigned for `gfsh` operations.
 | disconnect                             | DATA:MANAGE                      |
 | echo                                   | DATA:MANAGE                      |
 | encrypt password                       | DATA:MANAGE                      |
-| execute function                       | DATA:MANAGE                      |
+| execute function                       | DATA:WRITE                       |
 | export cluster-configuration           | CLUSTER:READ                     |
 | export config                          | CLUSTER:READ                     |
 | export data                            | CLUSTER:READ                     |


[02/10] geode git commit: User Guide typo repair (no JIRA ticket) secuirty => security

Posted by zh...@apache.org.
User Guide typo repair (no JIRA ticket) secuirty => security


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/dc6b6009
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/dc6b6009
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/dc6b6009

Branch: refs/heads/feature/GEM-1353
Commit: dc6b60096e1faa6744c96a38d1f2687384db4ab2
Parents: c2e7d1f
Author: Dave Barnes <db...@pivotal.io>
Authored: Mon May 1 17:15:07 2017 -0700
Committer: Dave Barnes <db...@pivotal.io>
Committed: Mon May 1 17:15:07 2017 -0700

----------------------------------------------------------------------
 .../basic_config/the_cache/managing_a_secure_cache.html.md.erb     | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/dc6b6009/geode-docs/basic_config/the_cache/managing_a_secure_cache.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/basic_config/the_cache/managing_a_secure_cache.html.md.erb b/geode-docs/basic_config/the_cache/managing_a_secure_cache.html.md.erb
index 94e3e52..1f39d94 100644
--- a/geode-docs/basic_config/the_cache/managing_a_secure_cache.html.md.erb
+++ b/geode-docs/basic_config/the_cache/managing_a_secure_cache.html.md.erb
@@ -56,7 +56,7 @@ These steps demonstrate a programmatic cache creation.
             ```
 
             **Note:**
-            Properties passed to a cache creation method override any settings in either the `gemfire.properties` file or `gfsecuirty.properties`.
+            Properties passed to a cache creation method override any settings in either the `gemfire.properties` file or `gfsecurity.properties`.
 
 2.  Close your cache when you are done, using the `close` method of the `ClientCache` instance or the inherited `close` method of the `Cache` instance. Example:
 


[07/10] geode git commit: GEODE-2828: AEQ created before the Lucene user regions

Posted by zh...@apache.org.
GEODE-2828: AEQ created before the Lucene user regions

	* The AEQ is now created before the Lucene user region
	* A countdown latch defers index repository computation until the user regions are ready
	* Integration tests no longer use a dummy executor, because a thread pool is needed for the afterPrimary call.

	This closes #481
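
The latch-based gating described above is a standard `java.util.concurrent.CountDownLatch` pattern; a simplified, self-contained sketch using the same method names that appear in the diff below:

```java
import java.util.concurrent.CountDownLatch;

public class RepositoryGate {
  // Repository computation blocks here until the user data region is ready.
  private final CountDownLatch isDataRegionReady = new CountDownLatch(1);

  void computeRepository() throws InterruptedException {
    isDataRegionReady.await();      // wait until allowRepositoryComputation() is called
    // ... safe to build the index repository against the user region now ...
  }

  void allowRepositoryComputation() {
    isDataRegionReady.countDown();  // called once the user region has been created
  }
}
```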


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/480a1e05
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/480a1e05
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/480a1e05

Branch: refs/heads/feature/GEM-1353
Commit: 480a1e05cd5bc332c1e5e2593c3468f640ded1c0
Parents: a6832ee
Author: nabarun <nn...@pivotal.io>
Authored: Tue May 2 15:02:23 2017 -0700
Committer: nabarun <nn...@pivotal.io>
Committed: Tue May 2 21:53:46 2017 -0700

----------------------------------------------------------------------
 .../internal/LonerDistributionManager.java      | 11 ++++-
 .../internal/offheap/OffHeapRegionBase.java     | 31 ++++++++++++--
 .../AbstractPartitionedRepositoryManager.java   | 18 +++++++-
 .../lucene/internal/LuceneBucketListener.java   |  4 +-
 .../lucene/internal/LuceneEventListener.java    |  4 --
 .../LuceneIndexForPartitionedRegion.java        | 18 ++++----
 .../cache/lucene/internal/LuceneIndexImpl.java  | 44 ++++++++++++--------
 .../cache/lucene/internal/LuceneRawIndex.java   | 10 ++++-
 .../lucene/internal/LuceneRegionListener.java   | 14 ++++++-
 .../lucene/internal/LuceneServiceImpl.java      | 34 +++++++--------
 .../internal/LuceneEventListenerJUnitTest.java  |  4 --
 .../lucene/internal/LuceneIndexFactorySpy.java  | 18 --------
 .../LuceneIndexForPartitionedRegionTest.java    | 22 ++++++----
 .../PartitionedRepositoryManagerJUnitTest.java  |  2 +
 .../RawLuceneRepositoryManagerJUnitTest.java    |  2 +
 15 files changed, 143 insertions(+), 93 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/480a1e05/geode-core/src/main/java/org/apache/geode/distributed/internal/LonerDistributionManager.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/distributed/internal/LonerDistributionManager.java b/geode-core/src/main/java/org/apache/geode/distributed/internal/LonerDistributionManager.java
index e9068e6..fdb6a13 100644
--- a/geode-core/src/main/java/org/apache/geode/distributed/internal/LonerDistributionManager.java
+++ b/geode-core/src/main/java/org/apache/geode/distributed/internal/LonerDistributionManager.java
@@ -71,7 +71,14 @@ public class LonerDistributionManager implements DM {
     // no threads needed
   }
 
-  protected void shutdown() {}
+  protected void shutdown() {
+    executor.shutdown();
+    try {
+      executor.awaitTermination(20, TimeUnit.SECONDS);
+    } catch (InterruptedException e) {
+      throw new InternalGemFireError("Interrupted while waiting for DM shutdown");
+    }
+  }
 
   private final InternalDistributedMember id;
 
@@ -94,7 +101,7 @@ public class LonerDistributionManager implements DM {
   private ConcurrentMap<InternalDistributedMember, InternalDistributedMember> canonicalIds =
       new ConcurrentHashMap();
   static private final DummyDMStats stats = new DummyDMStats();
-  static private final DummyExecutor executor = new DummyExecutor();
+  static private final ExecutorService executor = Executors.newCachedThreadPool();
 
   @Override
   public long cacheTimeMillis() {

http://git-wip-us.apache.org/repos/asf/geode/blob/480a1e05/geode-core/src/test/java/org/apache/geode/internal/offheap/OffHeapRegionBase.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/offheap/OffHeapRegionBase.java b/geode-core/src/test/java/org/apache/geode/internal/offheap/OffHeapRegionBase.java
index 62766cc..c0c6085 100644
--- a/geode-core/src/test/java/org/apache/geode/internal/offheap/OffHeapRegionBase.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/offheap/OffHeapRegionBase.java
@@ -31,12 +31,17 @@ import org.apache.geode.pdx.PdxReader;
 import org.apache.geode.pdx.PdxSerializable;
 import org.apache.geode.pdx.PdxWriter;
 import org.apache.geode.test.dunit.WaitCriterion;
+import org.awaitility.Awaitility;
+import org.junit.After;
 import org.junit.Test;
 
+import java.io.File;
+import java.io.FilenameFilter;
 import java.io.Serializable;
 import java.util.Arrays;
 import java.util.List;
 import java.util.Properties;
+import java.util.concurrent.TimeUnit;
 
 import static org.apache.geode.distributed.ConfigurationProperties.LOCATORS;
 import static org.apache.geode.distributed.ConfigurationProperties.MCAST_PORT;
@@ -72,6 +77,22 @@ public abstract class OffHeapRegionBase {
     return result;
   }
 
+  @After
+  public void cleanUp() {
+    File dir = new File(".");
+    File[] files = dir.listFiles(new FilenameFilter() {
+
+      @Override
+      public boolean accept(File dir, String name) {
+        return name.startsWith("BACKUP");
+      }
+
+    });
+    for (File file : files) {
+      file.delete();
+    }
+  }
+
   private void closeCache(GemFireCacheImpl gfc, boolean keepOffHeapAllocated) {
     gfc.close();
     if (!keepOffHeapAllocated) {
@@ -200,7 +221,8 @@ public abstract class OffHeapRegionBase {
       gfc.setCopyOnRead(true);
       final MemoryAllocator ma = gfc.getOffHeapStore();
       assertNotNull(ma);
-      assertEquals(0, ma.getUsedMemory());
+      Awaitility.await().atMost(60, TimeUnit.SECONDS)
+          .until(() -> assertEquals(0, ma.getUsedMemory()));
       Compressor compressor = null;
       if (compressed) {
         compressor = SnappyCompressor.getDefaultInstance();
@@ -413,7 +435,8 @@ public abstract class OffHeapRegionBase {
       assertTrue(ma.getUsedMemory() > 0);
       try {
         r.clear();
-        assertEquals(0, ma.getUsedMemory());
+        Awaitility.await().atMost(60, TimeUnit.SECONDS)
+            .until(() -> assertEquals(0, ma.getUsedMemory()));
       } catch (UnsupportedOperationException ok) {
       }
 
@@ -449,8 +472,8 @@ public abstract class OffHeapRegionBase {
       }
 
       r.destroyRegion();
-      assertEquals(0, ma.getUsedMemory());
-
+      Awaitility.await().atMost(60, TimeUnit.SECONDS)
+          .until(() -> assertEquals(0, ma.getUsedMemory()));
     } finally {
       if (r != null && !r.isDestroyed()) {
         r.destroyRegion();

http://git-wip-us.apache.org/repos/asf/geode/blob/480a1e05/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/AbstractPartitionedRepositoryManager.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/AbstractPartitionedRepositoryManager.java b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/AbstractPartitionedRepositoryManager.java
index 26bb488..867794d 100755
--- a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/AbstractPartitionedRepositoryManager.java
+++ b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/AbstractPartitionedRepositoryManager.java
@@ -19,6 +19,7 @@ import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Set;
 import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.CountDownLatch;
 
 import org.apache.geode.InternalGemFireError;
 import org.apache.geode.cache.Region;
@@ -47,18 +48,22 @@ public abstract class AbstractPartitionedRepositoryManager implements Repository
       new ConcurrentHashMap<Integer, IndexRepository>();
 
   /** The user region for this index */
-  protected final PartitionedRegion userRegion;
+  protected PartitionedRegion userRegion = null;
   protected final LuceneSerializer serializer;
   protected final LuceneIndexImpl index;
   protected volatile boolean closed;
+  final private CountDownLatch isDataRegionReady = new CountDownLatch(1);
 
   public AbstractPartitionedRepositoryManager(LuceneIndexImpl index, LuceneSerializer serializer) {
     this.index = index;
-    this.userRegion = (PartitionedRegion) index.getCache().getRegion(index.getRegionPath());
     this.serializer = serializer;
     this.closed = false;
   }
 
+  public void setUserRegionForRepositoryManager() {
+    this.userRegion = (PartitionedRegion) index.getCache().getRegion(index.getRegionPath());
+  }
+
   @Override
   public IndexRepository getRepository(Region region, Object key, Object callbackArg)
       throws BucketNotFoundException {
@@ -95,6 +100,11 @@ public abstract class AbstractPartitionedRepositoryManager implements Repository
       IndexRepository oldRepository) throws IOException;
 
   protected IndexRepository computeRepository(Integer bucketId) {
+    try {
+      isDataRegionReady.await();
+    } catch (InterruptedException e) {
+      throw new InternalGemFireError("Uable to create index repository", e);
+    }
     IndexRepository repo = indexRepositories.compute(bucketId, (key, oldRepository) -> {
       try {
         if (closed) {
@@ -111,6 +121,10 @@ public abstract class AbstractPartitionedRepositoryManager implements Repository
     return repo;
   }
 
+  protected void allowRepositoryComputation() {
+    isDataRegionReady.countDown();
+  }
+
   /**
    * Return the repository for a given user bucket
    */

http://git-wip-us.apache.org/repos/asf/geode/blob/480a1e05/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
index 32fb3fc..37871aa 100644
--- a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
+++ b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
@@ -24,10 +24,10 @@ import org.apache.lucene.store.AlreadyClosedException;
 
 public class LuceneBucketListener extends PartitionListenerAdapter {
   private static final Logger logger = LogService.getLogger();
-  private PartitionedRepositoryManager lucenePartitionRepositoryManager;
+  private AbstractPartitionedRepositoryManager lucenePartitionRepositoryManager;
   private final DM dm;
 
-  public LuceneBucketListener(PartitionedRepositoryManager partitionedRepositoryManager,
+  public LuceneBucketListener(AbstractPartitionedRepositoryManager partitionedRepositoryManager,
       final DM dm) {
     lucenePartitionRepositoryManager = partitionedRepositoryManager;
     this.dm = dm;

http://git-wip-us.apache.org/repos/asf/geode/blob/480a1e05/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneEventListener.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneEventListener.java b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneEventListener.java
index c3fa2ff..bc4a7da 100644
--- a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneEventListener.java
+++ b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneEventListener.java
@@ -27,7 +27,6 @@ import org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue;
 import org.apache.logging.log4j.Logger;
 import org.apache.geode.cache.CacheClosedException;
 import org.apache.geode.InternalGemFireError;
-import org.apache.geode.cache.Operation;
 import org.apache.geode.cache.Region;
 import org.apache.geode.cache.RegionDestroyedException;
 import org.apache.geode.cache.asyncqueue.AsyncEvent;
@@ -36,10 +35,7 @@ import org.apache.geode.cache.lucene.internal.repository.RepositoryManager;
 import org.apache.geode.cache.lucene.internal.repository.IndexRepository;
 import org.apache.geode.cache.query.internal.DefaultQuery;
 import org.apache.geode.internal.cache.BucketNotFoundException;
-import org.apache.geode.internal.cache.CacheObserverHolder;
 import org.apache.geode.internal.cache.PrimaryBucketException;
-import org.apache.geode.internal.cache.partitioned.Bucket;
-import org.apache.geode.internal.cache.tier.sockets.CacheClientProxy.TestHook;
 import org.apache.geode.internal.logging.LogService;
 import org.apache.lucene.store.AlreadyClosedException;
 

http://git-wip-us.apache.org/repos/asf/geode/blob/480a1e05/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneIndexForPartitionedRegion.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneIndexForPartitionedRegion.java b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneIndexForPartitionedRegion.java
index c39a4a8..41505d7 100644
--- a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneIndexForPartitionedRegion.java
+++ b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneIndexForPartitionedRegion.java
@@ -57,6 +57,15 @@ public class LuceneIndexForPartitionedRegion extends LuceneIndexImpl {
   }
 
   protected RepositoryManager createRepositoryManager() {
+    HeterogeneousLuceneSerializer mapper = new HeterogeneousLuceneSerializer(getFieldNames());
+    PartitionedRepositoryManager partitionedRepositoryManager =
+        new PartitionedRepositoryManager(this, mapper);
+    return partitionedRepositoryManager;
+  }
+
+  protected void createLuceneListenersAndFileChunkRegions(
+      AbstractPartitionedRepositoryManager partitionedRepositoryManager) {
+    partitionedRepositoryManager.setUserRegionForRepositoryManager();
     RegionShortcut regionShortCut;
     final boolean withPersistence = withPersistence();
     RegionAttributes regionAttributes = dataRegion.getAttributes();
@@ -78,14 +87,6 @@ public class LuceneIndexForPartitionedRegion extends LuceneIndexImpl {
     // create PR fileAndChunkRegion, but not to create its buckets for now
     final String fileRegionName = createFileRegionName();
     PartitionAttributes partitionAttributes = dataRegion.getPartitionAttributes();
-
-
-    // create PR chunkRegion, but not to create its buckets for now
-
-    // we will create RegionDirectories on the fly when data comes in
-    HeterogeneousLuceneSerializer mapper = new HeterogeneousLuceneSerializer(getFieldNames());
-    PartitionedRepositoryManager partitionedRepositoryManager =
-        new PartitionedRepositoryManager(this, mapper);
     DM dm = this.cache.getInternalDistributedSystem().getDistributionManager();
     LuceneBucketListener lucenePrimaryBucketListener =
         new LuceneBucketListener(partitionedRepositoryManager, dm);
@@ -98,7 +99,6 @@ public class LuceneIndexForPartitionedRegion extends LuceneIndexImpl {
     fileSystemStats
         .setBytesSupplier(() -> getFileAndChunkRegion().getPrStats().getDataStoreBytesInUse());
 
-    return partitionedRepositoryManager;
   }
 
   public PartitionedRegion getFileAndChunkRegion() {

http://git-wip-us.apache.org/repos/asf/geode/blob/480a1e05/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneIndexImpl.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneIndexImpl.java b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneIndexImpl.java
index 36f6720..3393bcf 100644
--- a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneIndexImpl.java
+++ b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneIndexImpl.java
@@ -34,7 +34,6 @@ import org.apache.geode.cache.lucene.internal.xml.LuceneIndexCreation;
 import org.apache.geode.internal.cache.InternalCache;
 import org.apache.geode.internal.cache.InternalRegionArguments;
 import org.apache.geode.internal.cache.LocalRegion;
-import org.apache.geode.internal.cache.PartitionedRegion;
 import org.apache.geode.internal.cache.extension.Extension;
 import org.apache.geode.internal.i18n.LocalizedStrings;
 import org.apache.geode.internal.logging.LogService;
@@ -48,6 +47,7 @@ public abstract class LuceneIndexImpl implements InternalLuceneIndex {
   protected final LuceneIndexStats indexStats;
 
   protected boolean hasInitialized = false;
+  protected boolean hasInitializedAEQ = false;
   protected Map<String, Analyzer> fieldAnalyzers;
   protected String[] searchableFieldNames;
   protected RepositoryManager repositoryManager;
@@ -131,30 +131,41 @@ public abstract class LuceneIndexImpl implements InternalLuceneIndex {
     if (!hasInitialized) {
       /* create index region */
       dataRegion = getDataRegion();
-      // assert dataRegion != null;
-
-      repositoryManager = createRepositoryManager();
-
-      // create AEQ, AEQ listener and specify the listener to repositoryManager
-      createAEQ(dataRegion);
-
+      createLuceneListenersAndFileChunkRegions(
+          (AbstractPartitionedRepositoryManager) repositoryManager);
       addExtension(dataRegion);
       hasInitialized = true;
     }
   }
 
+  protected void initializeAEQ(RegionAttributes attributes, String aeqId) {
+    if (!hasInitializedAEQ) {
+      repositoryManager = createRepositoryManager();
+      createAEQ(attributes, aeqId);
+      hasInitializedAEQ = true;
+    }
+  }
+
   protected abstract RepositoryManager createRepositoryManager();
 
+  protected abstract void createLuceneListenersAndFileChunkRegions(
+      AbstractPartitionedRepositoryManager partitionedRepositoryManager);
+
   protected AsyncEventQueue createAEQ(Region dataRegion) {
-    return createAEQ(createAEQFactory(dataRegion));
+    String aeqId = LuceneServiceImpl.getUniqueIndexName(getName(), regionPath);
+    return createAEQ(createAEQFactory(dataRegion.getAttributes()), aeqId);
   }
 
-  private AsyncEventQueueFactoryImpl createAEQFactory(final Region dataRegion) {
+  protected AsyncEventQueue createAEQ(RegionAttributes attributes, String aeqId) {
+    return createAEQ(createAEQFactory(attributes), aeqId);
+  }
+
+  private AsyncEventQueueFactoryImpl createAEQFactory(final RegionAttributes attributes) {
     AsyncEventQueueFactoryImpl factory =
         (AsyncEventQueueFactoryImpl) cache.createAsyncEventQueueFactory();
-    if (dataRegion instanceof PartitionedRegion) {
-      PartitionedRegion pr = (PartitionedRegion) dataRegion;
-      if (pr.getPartitionAttributes().getLocalMaxMemory() == 0) {
+    if (attributes.getPartitionAttributes() != null) {
+
+      if (attributes.getPartitionAttributes().getLocalMaxMemory() == 0) {
         // accessor will not create AEQ
         return null;
       }
@@ -165,22 +176,21 @@ public abstract class LuceneIndexImpl implements InternalLuceneIndex {
     factory.setMaximumQueueMemory(1000);
     factory.setDispatcherThreads(10);
     factory.setIsMetaQueue(true);
-    if (dataRegion.getAttributes().getDataPolicy().withPersistence()) {
+    if (attributes.getDataPolicy().withPersistence()) {
       factory.setPersistent(true);
     }
-    factory.setDiskStoreName(dataRegion.getAttributes().getDiskStoreName());
+    factory.setDiskStoreName(attributes.getDiskStoreName());
     factory.setDiskSynchronous(true);
     factory.setForwardExpirationDestroy(true);
     return factory;
   }
 
-  private AsyncEventQueue createAEQ(AsyncEventQueueFactoryImpl factory) {
+  private AsyncEventQueue createAEQ(AsyncEventQueueFactoryImpl factory, String aeqId) {
     if (factory == null) {
       return null;
     }
     LuceneEventListener listener = new LuceneEventListener(repositoryManager);
     factory.setGatewayEventSubstitutionListener(new LuceneEventSubstitutionFilter());
-    String aeqId = LuceneServiceImpl.getUniqueIndexName(getName(), regionPath);
     AsyncEventQueue indexQueue = factory.create(aeqId, listener);
     return indexQueue;
   }

http://git-wip-us.apache.org/repos/asf/geode/blob/480a1e05/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneRawIndex.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneRawIndex.java b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneRawIndex.java
index 75ab5ca..ee2930d 100755
--- a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneRawIndex.java
+++ b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneRawIndex.java
@@ -27,7 +27,15 @@ public class LuceneRawIndex extends LuceneIndexImpl {
   @Override
   protected RepositoryManager createRepositoryManager() {
     HeterogeneousLuceneSerializer mapper = new HeterogeneousLuceneSerializer(getFieldNames());
-    return new RawLuceneRepositoryManager(this, mapper);
+    RawLuceneRepositoryManager rawLuceneRepositoryManager =
+        new RawLuceneRepositoryManager(this, mapper);
+    return rawLuceneRepositoryManager;
+  }
+
+  @Override
+  protected void createLuceneListenersAndFileChunkRegions(
+      AbstractPartitionedRepositoryManager partitionedRepositoryManager) {
+    partitionedRepositoryManager.setUserRegionForRepositoryManager();
   }
 
   @Override

http://git-wip-us.apache.org/repos/asf/geode/blob/480a1e05/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneRegionListener.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneRegionListener.java b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneRegionListener.java
index f4e2a79..48462a0 100644
--- a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneRegionListener.java
+++ b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneRegionListener.java
@@ -23,6 +23,7 @@ import org.apache.geode.cache.EvictionAlgorithm;
 import org.apache.geode.cache.EvictionAttributes;
 import org.apache.geode.cache.Region;
 import org.apache.geode.cache.RegionAttributes;
+import org.apache.geode.cache.asyncqueue.internal.AsyncEventQueueImpl;
 import org.apache.geode.internal.cache.InternalCache;
 import org.apache.geode.internal.cache.InternalRegionArguments;
 import org.apache.geode.internal.cache.RegionListener;
@@ -43,6 +44,8 @@ public class LuceneRegionListener implements RegionListener {
 
   private final String[] fields;
 
+  private LuceneIndexImpl luceneIndex;
+
   public LuceneRegionListener(LuceneServiceImpl service, InternalCache cache, String indexName,
       String regionPath, String[] fields, Analyzer analyzer, Map<String, Analyzer> fieldAnalyzers) {
     this.service = service;
@@ -97,6 +100,9 @@ public class LuceneRegionListener implements RegionListener {
       internalRegionArgs.addCacheServiceProfile(new LuceneIndexCreationProfile(this.indexName,
           this.regionPath, this.fields, this.analyzer, this.fieldAnalyzers));
 
+      luceneIndex = this.service.beforeDataRegionCreated(this.indexName, this.regionPath, attrs,
+          this.analyzer, this.fieldAnalyzers, aeqId, this.fields);
+
       // Add internal async event id
       internalRegionArgs.addInternalAsyncEventQueueId(aeqId);
     }
@@ -106,8 +112,12 @@ public class LuceneRegionListener implements RegionListener {
   @Override
   public void afterCreate(Region region) {
     if (region.getFullPath().equals(this.regionPath)) {
-      this.service.afterDataRegionCreated(this.indexName, this.analyzer, this.regionPath,
-          this.fieldAnalyzers, this.fields);
+      this.service.afterDataRegionCreated(this.luceneIndex);
+      String aeqId = LuceneServiceImpl.getUniqueIndexName(this.indexName, this.regionPath);
+      AsyncEventQueueImpl aeq = (AsyncEventQueueImpl) cache.getAsyncEventQueue(aeqId);
+      AbstractPartitionedRepositoryManager repositoryManager =
+          (AbstractPartitionedRepositoryManager) luceneIndex.getRepositoryManager();
+      repositoryManager.allowRepositoryComputation();
       this.cache.removeRegionListener(this);
     }
   }

http://git-wip-us.apache.org/repos/asf/geode/blob/480a1e05/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java
index 437a552..ebee59e 100644
--- a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java
+++ b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java
@@ -31,10 +31,7 @@ import org.apache.lucene.analysis.Analyzer;
 import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper;
 import org.apache.lucene.analysis.standard.StandardAnalyzer;
 
-import org.apache.geode.cache.AttributesFactory;
 import org.apache.geode.cache.Cache;
-import org.apache.geode.cache.EvictionAlgorithm;
-import org.apache.geode.cache.EvictionAttributes;
 import org.apache.geode.cache.Region;
 import org.apache.geode.cache.RegionAttributes;
 import org.apache.geode.cache.execute.Execution;
@@ -57,7 +54,6 @@ import org.apache.geode.internal.DSFIDFactory;
 import org.apache.geode.internal.DataSerializableFixedID;
 import org.apache.geode.internal.cache.extension.Extensible;
 import org.apache.geode.internal.cache.CacheService;
-import org.apache.geode.internal.cache.InternalRegionArguments;
 import org.apache.geode.internal.cache.RegionListener;
 import org.apache.geode.internal.cache.xmlcache.XmlGenerator;
 import org.apache.geode.internal.i18n.LocalizedStrings;
@@ -167,28 +163,28 @@ public class LuceneServiceImpl implements InternalLuceneService {
    * 
    * Public because this is called by the Xml parsing code
    */
-  public void afterDataRegionCreated(final String indexName, final Analyzer analyzer,
-      final String dataRegionPath, final Map<String, Analyzer> fieldAnalyzers,
-      final String... fields) {
-    LuceneIndexImpl index = createIndexRegions(indexName, dataRegionPath);
-    index.setSearchableFields(fields);
-    index.setAnalyzer(analyzer);
-    index.setFieldAnalyzers(fieldAnalyzers);
+  public void afterDataRegionCreated(LuceneIndexImpl index) {
     index.initialize();
     registerIndex(index);
     if (this.managementListener != null) {
       this.managementListener.afterIndexCreated(index);
     }
+
   }
 
-  private LuceneIndexImpl createIndexRegions(String indexName, String regionPath) {
-    Region dataregion = this.cache.getRegion(regionPath);
-    if (dataregion == null) {
-      logger.info("Data region " + regionPath + " not found");
-      return null;
-    }
-    // Convert the region name into a canonical form
-    regionPath = dataregion.getFullPath();
+  public LuceneIndexImpl beforeDataRegionCreated(final String indexName, final String regionPath,
+      RegionAttributes attributes, final Analyzer analyzer,
+      final Map<String, Analyzer> fieldAnalyzers, String aeqId, final String... fields) {
+    LuceneIndexImpl index = createIndexObject(indexName, regionPath);
+    index.setSearchableFields(fields);
+    index.setAnalyzer(analyzer);
+    index.setFieldAnalyzers(fieldAnalyzers);
+    index.initializeAEQ(attributes, aeqId);
+    return index;
+
+  }
+
+  private LuceneIndexImpl createIndexObject(String indexName, String regionPath) {
     return luceneIndexFactory.create(indexName, regionPath, cache);
   }
 

http://git-wip-us.apache.org/repos/asf/geode/blob/480a1e05/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/LuceneEventListenerJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/LuceneEventListenerJUnitTest.java b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/LuceneEventListenerJUnitTest.java
index 801f6b6..88057e5 100644
--- a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/LuceneEventListenerJUnitTest.java
+++ b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/LuceneEventListenerJUnitTest.java
@@ -67,9 +67,7 @@ public class LuceneEventListenerJUnitTest {
 
     Mockito.when(manager.getRepository(eq(region1), any(), eq(callback1))).thenReturn(repo1);
     Mockito.when(manager.getRepository(eq(region2), any(), eq(null))).thenReturn(repo2);
-
     LuceneEventListener listener = new LuceneEventListener(manager);
-
     List<AsyncEvent> events = new ArrayList<AsyncEvent>();
 
     int numEntries = 100;
@@ -115,7 +113,6 @@ public class LuceneEventListenerJUnitTest {
     Logger log = Mockito.mock(Logger.class);
     Mockito.when(manager.getRepository(any(), any(), any()))
         .thenThrow(BucketNotFoundException.class);
-
     LuceneEventListener listener = new LuceneEventListener(manager);
     listener.logger = log;
     AsyncEvent event = Mockito.mock(AsyncEvent.class);
@@ -128,7 +125,6 @@ public class LuceneEventListenerJUnitTest {
   public void shouldThrowAndCaptureIOException() throws BucketNotFoundException {
     RepositoryManager manager = Mockito.mock(RepositoryManager.class);
     Mockito.when(manager.getRepository(any(), any(), any())).thenThrow(IOException.class);
-
     AtomicReference<Throwable> lastException = new AtomicReference<>();
     LuceneEventListener.setExceptionObserver(lastException::set);
     LuceneEventListener listener = new LuceneEventListener(manager);

http://git-wip-us.apache.org/repos/asf/geode/blob/480a1e05/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/LuceneIndexFactorySpy.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/LuceneIndexFactorySpy.java b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/LuceneIndexFactorySpy.java
index 8b379a5..1a092d7 100644
--- a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/LuceneIndexFactorySpy.java
+++ b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/LuceneIndexFactorySpy.java
@@ -15,15 +15,11 @@
 package org.apache.geode.cache.lucene.internal;
 
 import static org.mockito.Matchers.any;
-import static org.mockito.Mockito.*;
 
 import java.util.function.Consumer;
 
 import org.mockito.Mockito;
-import org.mockito.stubbing.Answer;
 
-import org.apache.geode.cache.lucene.internal.repository.RepositoryManager;
-import org.apache.geode.internal.cache.BucketNotFoundException;
 import org.apache.geode.internal.cache.InternalCache;
 
 public class LuceneIndexFactorySpy extends LuceneIndexImplFactory {
@@ -59,19 +55,5 @@ public class LuceneIndexFactorySpy extends LuceneIndexImplFactory {
       super(indexName, regionPath, cache);
     }
 
-    @Override
-    public RepositoryManager createRepositoryManager() {
-      RepositoryManager repositoryManagerSpy = Mockito.spy(super.createRepositoryManager());
-      Answer getRepositoryAnswer = invocation -> {
-        getRepositoryConsumer.accept(invocation.getArgumentAt(0, Object.class));
-        return invocation.callRealMethod();
-      };
-      try {
-        doAnswer(getRepositoryAnswer).when(repositoryManagerSpy).getRepositories(any());
-      } catch (BucketNotFoundException e) {
-        e.printStackTrace();
-      }
-      return repositoryManagerSpy;
-    }
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/480a1e05/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/LuceneIndexForPartitionedRegionTest.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/LuceneIndexForPartitionedRegionTest.java b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/LuceneIndexForPartitionedRegionTest.java
index 8e4c179..b2fdd84 100644
--- a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/LuceneIndexForPartitionedRegionTest.java
+++ b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/LuceneIndexForPartitionedRegionTest.java
@@ -194,7 +194,7 @@ public class LuceneIndexForPartitionedRegionTest {
     Region region = initializeScenario(withPersistence, regionPath, cache, 0);
     LuceneIndexForPartitionedRegion index =
         new LuceneIndexForPartitionedRegion(name, regionPath, cache);
-    LuceneIndexForPartitionedRegion spy = setupSpy(region, index);
+    LuceneIndexForPartitionedRegion spy = setupSpy(region, index, "aeq");
     spy.initialize();
   }
 
@@ -208,17 +208,18 @@ public class LuceneIndexForPartitionedRegionTest {
 
     LuceneIndexForPartitionedRegion index =
         new LuceneIndexForPartitionedRegion(name, regionPath, cache);
-    LuceneIndexForPartitionedRegion spy = setupSpy(region, index);
+    LuceneIndexForPartitionedRegion spy = setupSpy(region, index, "aeq");
 
-    verify(spy).createAEQ(eq(region));
+    verify(spy).createAEQ(eq(region.getAttributes()), eq("aeq"));
   }
 
   protected LuceneIndexForPartitionedRegion setupSpy(final Region region,
-      final LuceneIndexForPartitionedRegion index) {
+      final LuceneIndexForPartitionedRegion index, final String aeq) {
     index.setSearchableFields(new String[] {"field"});
     LuceneIndexForPartitionedRegion spy = spy(index);
     doReturn(null).when(spy).createFileRegion(any(), any(), any(), any(), any());
-    doReturn(null).when(spy).createAEQ(eq(region));
+    doReturn(null).when(spy).createAEQ(any(), any());
+    spy.initializeAEQ(region.getAttributes(), aeq);
     spy.initialize();
     return spy;
   }
@@ -233,7 +234,7 @@ public class LuceneIndexForPartitionedRegionTest {
 
     LuceneIndexForPartitionedRegion index =
         new LuceneIndexForPartitionedRegion(name, regionPath, cache);
-    LuceneIndexForPartitionedRegion spy = setupSpy(region, index);
+    LuceneIndexForPartitionedRegion spy = setupSpy(region, index, "aeq");
 
     verify(spy).createFileRegion(eq(RegionShortcut.PARTITION), eq(index.createFileRegionName()),
         any(), any(), any());
@@ -272,7 +273,8 @@ public class LuceneIndexForPartitionedRegionTest {
     index.setSearchableFields(new String[] {"field"});
     LuceneIndexForPartitionedRegion spy = spy(index);
     doReturn(null).when(spy).createFileRegion(any(), any(), any(), any(), any());
-    doReturn(null).when(spy).createAEQ(any());
+    doReturn(null).when(spy).createAEQ((RegionAttributes) any(), any());
+    spy.initializeAEQ(any(), any());
     spy.initialize();
 
     verify(spy).createFileRegion(eq(RegionShortcut.PARTITION_PERSISTENT),
@@ -292,7 +294,8 @@ public class LuceneIndexForPartitionedRegionTest {
     index.setSearchableFields(new String[] {"field"});
     LuceneIndexForPartitionedRegion spy = spy(index);
     doReturn(null).when(spy).createFileRegion(any(), any(), any(), any(), any());
-    doReturn(null).when(spy).createAEQ(any());
+    doReturn(null).when(spy).createAEQ(any(), any());
+    spy.initializeAEQ(any(), any());
     spy.initialize();
     spy.initialize();
 
@@ -316,7 +319,8 @@ public class LuceneIndexForPartitionedRegionTest {
         new LuceneIndexForPartitionedRegion(name, regionPath, cache);
     index = spy(index);
     when(index.getFieldNames()).thenReturn(fields);
-    doReturn(aeq).when(index).createAEQ(any());
+    doReturn(aeq).when(index).createAEQ(any(), any());
+    index.initializeAEQ(cache.getRegionAttributes(regionPath), aeq.getId());
     index.initialize();
     PartitionedRegion region = (PartitionedRegion) cache.getRegion(regionPath);
     ResultCollector collector = mock(ResultCollector.class);

http://git-wip-us.apache.org/repos/asf/geode/blob/480a1e05/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManagerJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManagerJUnitTest.java b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManagerJUnitTest.java
index 87317cc..30e64f2 100644
--- a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManagerJUnitTest.java
+++ b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManagerJUnitTest.java
@@ -113,6 +113,8 @@ public class PartitionedRepositoryManagerJUnitTest {
     when(indexForPR.getCache()).thenReturn(cache);
     when(indexForPR.getRegionPath()).thenReturn("/testRegion");
     repoManager = new PartitionedRepositoryManager(indexForPR, serializer);
+    repoManager.setUserRegionForRepositoryManager();
+    repoManager.allowRepositoryComputation();
   }
 
   @Test

http://git-wip-us.apache.org/repos/asf/geode/blob/480a1e05/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManagerJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManagerJUnitTest.java b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManagerJUnitTest.java
index bca7085..df31bb9 100755
--- a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManagerJUnitTest.java
+++ b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManagerJUnitTest.java
@@ -78,6 +78,8 @@ public class RawLuceneRepositoryManagerJUnitTest extends PartitionedRepositoryMa
     when(indexForPR.getRegionPath()).thenReturn("/testRegion");
     when(indexForPR.withPersistence()).thenReturn(true);
     repoManager = new RawLuceneRepositoryManager(indexForPR, serializer);
+    repoManager.setUserRegionForRepositoryManager();
+    repoManager.allowRepositoryComputation();
   }
 
   @Test


[03/10] geode git commit: GEODE-2778: gfsh help now uses log4j log levels: update user guide This closes #484

Posted by zh...@apache.org.
GEODE-2778: gfsh help now uses log4j log levels: update user guide
This closes #484
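
For reference, the accepted level names now match Log4j 2's standard levels; a small sketch that prints the same set on the Java side (assuming the `org.apache.logging.log4j` API is on the classpath):

```java
import org.apache.logging.log4j.Level;

public class LogLevels {
  public static void main(String[] args) {
    // gfsh now accepts Log4j 2 level names rather than java.util.logging names.
    for (Level level : new Level[] {Level.ALL, Level.TRACE, Level.DEBUG, Level.INFO,
        Level.WARN, Level.ERROR, Level.FATAL, Level.OFF}) {
      System.out.println(level.name());
    }
  }
}
```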


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/b81ebcb1
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/b81ebcb1
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/b81ebcb1

Branch: refs/heads/feature/GEM-1353
Commit: b81ebcb194d3b31bcb6afcc4ce2addd6ba049f58
Parents: dc6b600
Author: Dave Barnes <db...@pivotal.io>
Authored: Fri Apr 28 15:55:50 2017 -0700
Committer: Dave Barnes <db...@pivotal.io>
Committed: Tue May 2 14:40:36 2017 -0700

----------------------------------------------------------------------
 .../gfsh/command-pages/alter.html.md.erb        | 22 ++++++--------------
 .../gfsh/command-pages/change.html.md.erb       | 18 ++++------------
 .../gfsh/command-pages/export.html.md.erb       |  2 +-
 .../gfsh/command-pages/start.html.md.erb        |  4 ++--
 .../gfsh/configuring_gfsh.html.md.erb           |  2 +-
 5 files changed, 14 insertions(+), 34 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/b81ebcb1/geode-docs/tools_modules/gfsh/command-pages/alter.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/command-pages/alter.html.md.erb b/geode-docs/tools_modules/gfsh/command-pages/alter.html.md.erb
index 4601975..f28c1ce 100644
--- a/geode-docs/tools_modules/gfsh/command-pages/alter.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/command-pages/alter.html.md.erb
@@ -445,20 +445,10 @@ alter runtime [--member=value] [--group=value] [--archive-disk-space-limit=value
 <td>0</td>
 </tr>
 <tr class="odd">
-<td><span class="keyword parmname">\-\-log-level </span></td>
-<td>Log level. Valid values are:
-<ul>
-<li><code class="ph codeph">none</code></li>
-<li><code class="ph codeph">error</code></li>
-<li><code class="ph codeph">info</code></li>
-<li><code class="ph codeph">config</code></li>
-<li><code class="ph codeph">warning</code></li>
-<li><code class="ph codeph">severe</code></li>
-<li><code class="ph codeph">fine</code></li>
-<li><code class="ph codeph">finer</code></li>
-<li><code class="ph codeph">finest</code></li>
-</ul></td>
-<td>config</td>
+<td><span class="keyword parmname">\-\-loglevel </span></td>
+<td>The new log level. This option is required and you must specify a value.
+Valid values are: `ALL`, `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`, `FATAL`, `OFF`.</td>
+<td>INFO</td>
 </tr>
 <tr class="even">
 <td><span class="keyword parmname">\-\-statistic-archive-file </span></td>
@@ -508,13 +498,13 @@ alter runtime [--member=value] [--group=value] [--archive-disk-space-limit=value
 **Example Commands:**
 
 ``` pre
-alter runtime --member=server1 --log-level=finest --enable-statistics=true
+alter runtime --member=server1 --loglevel=WARN --enable-statistics=true
 ```
 
 **Sample Output:**
 
 ``` pre
-gfsh>alter runtime --member=server1 --log-level=finest --enable-statistics=true
+gfsh>alter runtime --member=server1 --loglevel=WARN --enable-statistics=true
 Runtime configuration altered successfully for the following member(s)
 192.0.2.0(server1:240)<v1>:64871
 ```

http://git-wip-us.apache.org/repos/asf/geode/blob/b81ebcb1/geode-docs/tools_modules/gfsh/command-pages/change.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/command-pages/change.html.md.erb b/geode-docs/tools_modules/gfsh/command-pages/change.html.md.erb
index 1ea2e0d..8ff5e9b 100644
--- a/geode-docs/tools_modules/gfsh/command-pages/change.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/command-pages/change.html.md.erb
@@ -60,18 +60,8 @@ change loglevel --loglevel=value [--members=value(nullvalue)*] [--groups=value(n
 <tr class="odd">
 <td><span class="keyword parmname">\-\-loglevel</span></td>
 <td><em>Required.</em> Log level to change. Valid options are:
-<ul>
-<li><code class="ph codeph">all</code></li>
-<li><code class="ph codeph">finest</code></li>
-<li><code class="ph codeph">finer</code></li>
-<li><code class="ph codeph">fine</code></li>
-<li><code class="ph codeph">config</code></li>
-<li><code class="ph codeph">info</code></li>
-<li><code class="ph codeph">warning</code></li>
-<li><code class="ph codeph">error</code></li>
-<li><code class="ph codeph">severe</code></li>
-<li><code class="ph codeph">none</code></li>
-</ul></td>
+ `ALL`, `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`, `FATAL`, `OFF`. 
+</td>
 <td> </td>
 </tr>
 </tbody>
@@ -82,13 +72,13 @@ change loglevel --loglevel=value [--members=value(nullvalue)*] [--groups=value(n
 **Example Commands:**
 
 ``` pre
-gfsh>change loglevel --loglevel=severe --members=server1
+gfsh>change loglevel --loglevel=DEBUG --members=server1
 ```
 
 **Sample Output:**
 
 ``` pre
-gfsh>change loglevel --loglevel=severe --members=server1
+gfsh>change loglevel --loglevel=DEBUG --members=server1
 
 Summary
 

http://git-wip-us.apache.org/repos/asf/geode/blob/b81ebcb1/geode-docs/tools_modules/gfsh/command-pages/export.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/command-pages/export.html.md.erb b/geode-docs/tools_modules/gfsh/command-pages/export.html.md.erb
index d8dbb1b..ccf95cc 100644
--- a/geode-docs/tools_modules/gfsh/command-pages/export.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/command-pages/export.html.md.erb
@@ -178,7 +178,7 @@ export logs [--dir=value] [--group=value(,value)*] [--member=value(,value)*]
 | <span class="keyword parmname">\\-\\-dir</span>            | Local directory to which log files will be written when logs are exported using an http connection. Ignored when the command is executed over JMX.     |               | 
 | <span class="keyword parmname">\\-\\-group</span>          | Group of members whose log files will be exported.                                                                         |               |
 | <span class="keyword parmname">&#8209;&#8209;member</span>         | Name/ID of the member whose log files will be exported.                                                                    |               |
-| <span class="keyword parmname">\\-\\-log-level</span>      | Minimum level of log entries to export. Valid values are: `fatal`, `error`, `warn`, `info`, `debug`, `trace`, and `all`. | `info`        |
+| <span class="keyword parmname">\\-\\-log-level</span>      | Minimum level of log entries to export. Valid values are: `OFF`, `FATAL`, `ERROR`, `WARN`, `INFO`, `DEBUG`, `TRACE`, and `ALL`. | `INFO`        |
 | <span class="keyword parmname">\\-\\-only-log-level</span> | Whether to only include only entries that exactly match the <span class="keyword parmname">\\-\\-log-level</span> specified.  | false         |
 | <span class="keyword parmname">&#8209;&#8209;merge&#8209;log</span>      | Whether to merge logs after exporting to the target directory (deprecated).                                                             | false         |
 | <span class="keyword parmname">\\-\\-start-time</span>     | Log entries that occurred after this time will be exported. Format: yyyy/MM/dd/HH/mm/ss/SSS/z OR yyyy/MM/dd                | no limit      |

http://git-wip-us.apache.org/repos/asf/geode/blob/b81ebcb1/geode-docs/tools_modules/gfsh/command-pages/start.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/command-pages/start.html.md.erb b/geode-docs/tools_modules/gfsh/command-pages/start.html.md.erb
index 4cdab79..e2a4edc 100644
--- a/geode-docs/tools_modules/gfsh/command-pages/start.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/command-pages/start.html.md.erb
@@ -329,7 +329,7 @@ start locator --name=value [--bind-address=value] [--force(=value)] [--group=val
 </tr>
 <tr class="odd">
 <td><span class="keyword parmname">\-\-log-level</span></td>
-<td>Level of output logged to the locator log file. Possible values for log-level include: finest, finer, fine, config, info, warning, severe, none.</td>
+<td>Level of output logged to the locator log file. Possible values for log-level include: ALL, TRACE, DEBUG, INFO, WARN, ERROR, FATAL, OFF.</td>
 <td> </td>
 </tr>
 <tr class="even">
@@ -611,7 +611,7 @@ start server --name=value [--assign-buckets(=value)] [--bind-address=value]
 </tr>
 <tr class="even">
 <td><span class="keyword parmname">\-\-log-level</span></td>
-<td>Sets the level of output logged to the Cache Server log file. Possible values for log-level include: finest, finer, fine, config, info, warning, severe, none.</td>
+<td>Sets the level of output logged to the Cache Server log file. Possible values for log-level include: ALL, TRACE, DEBUG, INFO, WARN, ERROR, FATAL, OFF.</td>
 <td> </td>
 </tr>
 <tr class="odd">

http://git-wip-us.apache.org/repos/asf/geode/blob/b81ebcb1/geode-docs/tools_modules/gfsh/configuring_gfsh.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/configuring_gfsh.html.md.erb b/geode-docs/tools_modules/gfsh/configuring_gfsh.html.md.erb
index 3c8d3a7..b112e66 100644
--- a/geode-docs/tools_modules/gfsh/configuring_gfsh.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/configuring_gfsh.html.md.erb
@@ -58,7 +58,7 @@ See [Useful gfsh Shell Variables](useful_gfsh_shell_variables.html#concept_731EC
 
 ## <a id="concept_3B9C6CE2F64841E98C33D9F6441DF487__section_BE7FB8B355E748FA8BEFE75B2C3CB86E" class="no-quick-link"></a>Configuring gfsh Session Logging
 
-By default, `gfsh` session logging is disabled. To enable gfsh logging, you must set the Java system property `-Dgfsh.log-level=desired_log_level                 `where *desired\_log \_level* is one of the following values: severe, warning, info, config, fine, finer, finest. For example, in Linux:
+By default, `gfsh` session logging is disabled. To enable gfsh logging, you must set the Java system property `-Dgfsh.log-level=desired_log_level` where *desired\_log\_level* is one of the following values: severe, warning, info, config, fine, finer, finest. For example, in Linux:
 
 ``` pre
 export JAVA_ARGS=-Dgfsh.log-level=info
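
For readers who want to map the new log4j-style level names onto an embedded member rather than the gfsh command line, here is a minimal sketch. The `LogLevelExample` class name is mine and the snippet is not part of this commit; it only assumes the standard `log-level` configuration property on a standalone peer cache.

``` java
import java.util.Properties;

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.distributed.ConfigurationProperties;

public class LogLevelExample {
  public static void main(String[] args) {
    Properties props = new Properties();
    // Same log4j-style names the updated pages list: ALL, TRACE, DEBUG, INFO, WARN, ERROR, FATAL, OFF
    props.setProperty(ConfigurationProperties.LOG_LEVEL, "DEBUG");
    props.setProperty(ConfigurationProperties.MCAST_PORT, "0"); // standalone member, no locator

    Cache cache = new CacheFactory(props).create();
    try {
      // fine-level messages correspond to DEBUG, so this line reaches the member log
      cache.getLogger().fine("member started with log-level=DEBUG");
    } finally {
      cache.close();
    }
  }
}
```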


[10/10] geode git commit: geode-2848: when all of the partitioned regions are found to be destroyed, add the events back

Posted by zh...@apache.org.
geode-2848: when all of the partitioned regions are found to be destroyed, add the
events back


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/d4ece31f
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/d4ece31f
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/d4ece31f

Branch: refs/heads/feature/GEM-1353
Commit: d4ece31fa23bbe74c8be0a82ff4b9d143bad79b3
Parents: 1683265
Author: zhouxh <gz...@pivotal.io>
Authored: Mon May 1 14:41:22 2017 -0700
Committer: zhouxh <gz...@pivotal.io>
Committed: Wed May 3 13:52:34 2017 -0700

----------------------------------------------------------------------
 .../cache/wan/parallel/ParallelGatewaySenderQueue.java       | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/d4ece31f/geode-core/src/main/java/org/apache/geode/internal/cache/wan/parallel/ParallelGatewaySenderQueue.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/wan/parallel/ParallelGatewaySenderQueue.java b/geode-core/src/main/java/org/apache/geode/internal/cache/wan/parallel/ParallelGatewaySenderQueue.java
index 9696b90..82e6f68 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/wan/parallel/ParallelGatewaySenderQueue.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/wan/parallel/ParallelGatewaySenderQueue.java
@@ -1724,6 +1724,8 @@ public class ParallelGatewaySenderQueue implements RegionQueue {
               ParallelQueueRemovalMessage pqrm = new ParallelQueueRemovalMessage(temp);
               pqrm.setRecipients(recipients);
               dm.putOutgoing(pqrm);
+            } else {
+              regionToDispatchedKeysMap.putAll(temp);
             }
 
           } // be somewhat tolerant of failures
@@ -1773,8 +1775,10 @@ public class ParallelGatewaySenderQueue implements RegionQueue {
     private Set<InternalDistributedMember> getAllRecipients(GemFireCacheImpl cache, Map map) {
       Set recipients = new ObjectOpenHashSet();
       for (Object pr : map.keySet()) {
-        recipients.addAll(((PartitionedRegion) (cache.getRegion((String) pr))).getRegionAdvisor()
-            .adviseDataStore());
+        PartitionedRegion partitionedRegion = (PartitionedRegion) cache.getRegion((String) pr);
+        if (partitionedRegion != null && partitionedRegion.getRegionAdvisor() != null) {
+          recipients.addAll(partitionedRegion.getRegionAdvisor().adviseDataStore());
+        }
       }
       return recipients;
     }
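
The change above keeps a drained batch of dispatched keys from being lost when every partitioned region backing the queue has already been destroyed and no recipients can be computed. A rough, hypothetical sketch of that pattern follows; the class and method names are mine and this is not the actual ParallelGatewaySenderQueue code.

``` java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.function.Consumer;

/** Hypothetical sketch: drain a batch, send it if at least one recipient exists, otherwise put it back. */
class DispatchedKeyBuffer<K, V> {
  private final Map<K, V> pending = new HashMap<>();

  synchronized void record(K key, V value) {
    pending.put(key, value);
  }

  synchronized void flush(Set<?> recipients, Consumer<Map<K, V>> send) {
    if (pending.isEmpty()) {
      return;
    }
    Map<K, V> batch = new HashMap<>(pending);
    pending.clear();
    if (!recipients.isEmpty()) {
      send.accept(batch);    // analogous to dm.putOutgoing(pqrm)
    } else {
      pending.putAll(batch); // analogous to regionToDispatchedKeysMap.putAll(temp)
    }
  }
}
```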


[06/10] geode git commit: Merge branch 'feature/GEODE-2843' into develop

Posted by zh...@apache.org.
Merge branch 'feature/GEODE-2843' into develop

* feature/GEODE-2843:
  GEODE-2843 User Guide - example should specify <client-cache>


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/a6832ee2
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/a6832ee2
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/a6832ee2

Branch: refs/heads/feature/GEM-1353
Commit: a6832ee2c4878ab5d8631c91fdf40e28098bde4d
Parents: 0cb9e7a dd246bd
Author: Dave Barnes <db...@pivotal.io>
Authored: Tue May 2 17:09:48 2017 -0700
Committer: Dave Barnes <db...@pivotal.io>
Committed: Tue May 2 17:09:48 2017 -0700

----------------------------------------------------------------------
 .../reference/topics/client-cache.html.md.erb   | 34 +++++++++++---------
 1 file changed, 18 insertions(+), 16 deletions(-)
----------------------------------------------------------------------



[09/10] geode git commit: GEODE-2847: Get correct version tags for retried bulk operation

Posted by zh...@apache.org.
GEODE-2847: Get correct version tags for retried bulk operation


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/16832655
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/16832655
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/16832655

Branch: refs/heads/feature/GEM-1353
Commit: 16832655d18592ddaf3c89979be30e5e7caa10f1
Parents: 0dd2552
Author: eshu <es...@pivotal.io>
Authored: Wed May 3 10:14:08 2017 -0700
Committer: eshu <es...@pivotal.io>
Committed: Wed May 3 10:14:08 2017 -0700

----------------------------------------------------------------------
 .../geode/internal/cache/EventTracker.java      | 117 +++++++------------
 .../geode/internal/cache/LocalRegion.java       |  24 ++--
 .../cache/partitioned/PutAllPRMessage.java      |   2 +-
 .../cache/partitioned/RemoveAllPRMessage.java   |   2 +-
 .../tier/sockets/ClientProxyMembershipID.java   |   2 +-
 .../AbstractDistributedRegionJUnitTest.java     |  15 ++-
 .../cache/DistributedRegionJUnitTest.java       |  54 ++++++++-
 7 files changed, 120 insertions(+), 96 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/16832655/geode-core/src/main/java/org/apache/geode/internal/cache/EventTracker.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/EventTracker.java b/geode-core/src/main/java/org/apache/geode/internal/cache/EventTracker.java
index 2ddfdc4..278367c 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/EventTracker.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/EventTracker.java
@@ -59,14 +59,12 @@ public class EventTracker {
       new ConcurrentHashMap<ThreadIdentifier, EventSeqnoHolder>(100);
 
   /**
-   * a mapping of originator to bulkOp's last status (true means finished processing) applied to
-   * this cache.
+   * a mapping of originator to bulkOps
    *
-   * Keys are instances of @link {@link ThreadIdentifier}, values are instances of
-   * {@link BulkOpProcessed}.
+   * Keys are instances of {@link ThreadIdentifier}
    */
-  private final ConcurrentMap<ThreadIdentifier, BulkOpProcessed> recordedBulkOps =
-      new ConcurrentHashMap<ThreadIdentifier, BulkOpProcessed>(100);
+  private final ConcurrentMap<ThreadIdentifier, Object> recordedBulkOps =
+      new ConcurrentHashMap<ThreadIdentifier, Object>(100);
 
   /**
    * a mapping of originator to bulkOperation's last version tags. This map differs from
@@ -141,7 +139,7 @@ public class EventTracker {
   public EventTracker(LocalRegion region) {
     this.cache = region.cache;
     this.name = "Event Tracker for " + region.getName();
-    this.initializationLatch = new StoppableCountDownLatch(region.stopper, 1);
+    this.initializationLatch = new StoppableCountDownLatch(region.getStopper(), 1);
   }
 
   /** start this event tracker */
@@ -307,19 +305,22 @@ public class EventTracker {
       }
     }
 
+    EventSeqnoHolder newEvh = new EventSeqnoHolder(eventID.getSequenceID(), tag);
+    if (logger.isTraceEnabled()) {
+      logger.trace("region event tracker recording {}", event);
+    }
+    recordSeqno(membershipID, newEvh);
+
     // If this is a bulkOp, and concurrency checks are enabled, we need to
     // save the version tag in case we retry.
-    if (lr.concurrencyChecksEnabled
+    // Record the bulk-op version tag after recordSeqno, so that recordBulkOpStart
+    // for a retried bulk op does not incorrectly remove the saved version tag from
+    // recordedBulkOpVersionTags
+    if (lr.getConcurrencyChecksEnabled()
         && (event.getOperation().isPutAll() || event.getOperation().isRemoveAll())
         && lr.getServerProxy() == null) {
       recordBulkOpEvent(event, membershipID);
     }
-
-    EventSeqnoHolder newEvh = new EventSeqnoHolder(eventID.getSequenceID(), tag);
-    if (logger.isTraceEnabled()) {
-      logger.trace("region event tracker recording {}", event);
-    }
-    recordSeqno(membershipID, newEvh);
   }
 
   /**
@@ -542,24 +543,19 @@ public class EventTracker {
     ThreadIdentifier membershipID =
         new ThreadIdentifier(eventID.getMembershipID(), eventID.getThreadID());
 
-    BulkOpProcessed opSyncObj =
-        recordedBulkOps.putIfAbsent(membershipID, new BulkOpProcessed(false));
-    if (opSyncObj == null) {
-      opSyncObj = recordedBulkOps.get(membershipID);
-    }
+    Object opSyncObj = null;
+    do {
+      opSyncObj = recordedBulkOps.putIfAbsent(membershipID, new Object());
+      if (opSyncObj == null) {
+        opSyncObj = recordedBulkOps.get(membershipID);
+      }
+    } while (opSyncObj == null);
+
     synchronized (opSyncObj) {
       try {
-        if (opSyncObj.getStatus() && logger.isDebugEnabled()) {
-          logger.debug("SyncBulkOp: The operation was performed by another thread.");
-        } else {
-          recordBulkOpStart(membershipID);
-
-          // Perform the bulk op
-          r.run();
-          // set to true in case another thread is waiting at sync
-          opSyncObj.setStatus(true);
-          recordedBulkOps.remove(membershipID);
-        }
+        recordBulkOpStart(membershipID, eventID);
+        // Perform the bulk op
+        r.run();
       } finally {
         recordedBulkOps.remove(membershipID);
       }
@@ -567,14 +563,23 @@ public class EventTracker {
   }
 
   /**
-   * Called when a bulkOp is started on the local region. Used to clear event tracker state from the
-   * last bulkOp.
+   * Called when a new bulkOp is started on the local region. Used to clear event tracker state from
+   * the last bulkOp.
    */
-  public void recordBulkOpStart(ThreadIdentifier tid) {
+  public void recordBulkOpStart(ThreadIdentifier tid, EventID eventID) {
     if (logger.isDebugEnabled()) {
       logger.debug("recording bulkOp start for {}", tid.expensiveToString());
     }
-    this.recordedBulkOpVersionTags.remove(tid);
+    EventSeqnoHolder evh = recordedEvents.get(tid);
+    if (evh == null) {
+      return;
+    }
+    synchronized (evh) {
+      // only remove it when a new bulk op occurs
+      if (eventID.getSequenceID() > evh.lastSeqno) {
+        this.recordedBulkOpVersionTags.remove(tid);
+      }
+    }
   }
 
   /**
@@ -660,50 +665,6 @@ public class EventTracker {
   }
 
   /**
-   * A status tracker for each bulk operation (putAll or removeAll) from originators specified by
-   * membershipID and threadID in the cache processed is true means the bulk op is processed by one
-   * thread no need to redo it by other threads.
-   * 
-   * @since GemFire 5.7
-   */
-  static class BulkOpProcessed {
-    /** whether the op is processed */
-    private boolean processed;
-
-    /**
-     * creates a new instance to save status of a bulk op
-     * 
-     * @param status true if the op has been processed
-     */
-    BulkOpProcessed(boolean status) {
-      this.processed = status;
-    }
-
-    /**
-     * setter method to change the status
-     * 
-     * @param status true if the op has been processed
-     */
-    void setStatus(boolean status) {
-      this.processed = status;
-    }
-
-    /**
-     * getter method to peek the current status
-     * 
-     * @return current status
-     */
-    boolean getStatus() {
-      return this.processed;
-    }
-
-    @Override
-    public String toString() {
-      return "BULKOP(" + this.processed + ")";
-    }
-  }
-
-  /**
    * A holder for the version tags generated for a bulk operation (putAll or removeAll). These
    * version tags are retrieved when a bulk op is retried.
    * 
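
The syncBulkOp change above replaces the BulkOpProcessed status object with a plain Object token published via putIfAbsent, so concurrent retries of the same bulk operation serialize on one monitor and the retry re-runs instead of being skipped. A simplified, hypothetical sketch of that locking pattern follows (the class and method names here are mine, not Geode's).

``` java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Hypothetical sketch: one sync token per originator, created on demand and removed when the op completes. */
class PerOriginatorSync<K> {
  private final ConcurrentMap<K, Object> tokens = new ConcurrentHashMap<>();

  void runSerialized(K originator, Runnable bulkOp) {
    Object token;
    do {
      token = tokens.putIfAbsent(originator, new Object());
      if (token == null) {
        // we may have just published a fresh token; re-read it (it can also be
        // removed by a finishing thread in between, in which case we loop)
        token = tokens.get(originator);
      }
    } while (token == null);

    synchronized (token) {
      try {
        bulkOp.run();
      } finally {
        tokens.remove(originator); // mirrors the finally block in the diff above
      }
    }
  }
}
```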

http://git-wip-us.apache.org/repos/asf/geode/blob/16832655/geode-core/src/main/java/org/apache/geode/internal/cache/LocalRegion.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/LocalRegion.java b/geode-core/src/main/java/org/apache/geode/internal/cache/LocalRegion.java
index 8c061b0..2dec53b 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/LocalRegion.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/LocalRegion.java
@@ -505,6 +505,10 @@ public class LocalRegion extends AbstractRegion implements LoaderHelperFactory,
     return new Stopper();
   }
 
+  protected CancelCriterion getStopper() {
+    return this.stopper;
+  }
+
   private final TestCallable testCallable;
 
   /**
@@ -682,10 +686,8 @@ public class LocalRegion extends AbstractRegion implements LoaderHelperFactory,
 
 
   /**
-   * Test method for getting the event tracker.
    * 
-   * this method is for testing only. Other region classes may track events using different
-   * mechanisms than EventTrackers
+   * Other region classes may track events using different mechanisms than EventTrackers
    */
   protected EventTracker getEventTracker() {
     return this.eventTracker;
@@ -3475,6 +3477,10 @@ public class LocalRegion extends AbstractRegion implements LoaderHelperFactory,
     }
   }
 
+  protected boolean getEnableConcurrencyChecks() {
+    return this.concurrencyChecksEnabled;
+  }
+
   /**
    * validate attributes of subregion being created, sent to parent
    *
@@ -6151,8 +6157,12 @@ public class LocalRegion extends AbstractRegion implements LoaderHelperFactory,
       isDup = this.eventTracker.hasSeenEvent(event);
       if (isDup) {
         event.setPossibleDuplicate(true);
-        if (this.concurrencyChecksEnabled && event.getVersionTag() == null) {
-          event.setVersionTag(findVersionTagForClientEvent(event.getEventId()));
+        if (getConcurrencyChecksEnabled() && event.getVersionTag() == null) {
+          if (event.isBulkOpInProgress()) {
+            event.setVersionTag(findVersionTagForClientBulkOp(event.getEventId()));
+          } else {
+            event.setVersionTag(findVersionTagForClientEvent(event.getEventId()));
+          }
         }
       } else {
         // bug #48205 - a retried PR operation may already have a version assigned to it
@@ -6253,9 +6263,9 @@ public class LocalRegion extends AbstractRegion implements LoaderHelperFactory,
     }
   }
 
-  public void recordBulkOpStart(ThreadIdentifier membershipID) {
+  public void recordBulkOpStart(ThreadIdentifier membershipID, EventID eventID) {
     if (this.eventTracker != null && !isTX()) {
-      this.eventTracker.recordBulkOpStart(membershipID);
+      this.eventTracker.recordBulkOpStart(membershipID, eventID);
     }
   }
 

http://git-wip-us.apache.org/repos/asf/geode/blob/16832655/geode-core/src/main/java/org/apache/geode/internal/cache/partitioned/PutAllPRMessage.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/partitioned/PutAllPRMessage.java b/geode-core/src/main/java/org/apache/geode/internal/cache/partitioned/PutAllPRMessage.java
index 27f5aa0..ed1fe0a 100755
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/partitioned/PutAllPRMessage.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/partitioned/PutAllPRMessage.java
@@ -438,7 +438,7 @@ public final class PutAllPRMessage extends PartitionMessageWithDirectReply {
             EventID eventID = putAllPRData[0].getEventID();
             ThreadIdentifier membershipID =
                 new ThreadIdentifier(eventID.getMembershipID(), eventID.getThreadID());
-            bucketRegion.recordBulkOpStart(membershipID);
+            bucketRegion.recordBulkOpStart(membershipID, eventID);
           }
           bucketRegion.waitUntilLocked(keys);
           boolean lockedForPrimary = false;

http://git-wip-us.apache.org/repos/asf/geode/blob/16832655/geode-core/src/main/java/org/apache/geode/internal/cache/partitioned/RemoveAllPRMessage.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/partitioned/RemoveAllPRMessage.java b/geode-core/src/main/java/org/apache/geode/internal/cache/partitioned/RemoveAllPRMessage.java
index f4f6299..0e38ddc 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/partitioned/RemoveAllPRMessage.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/partitioned/RemoveAllPRMessage.java
@@ -434,7 +434,7 @@ public final class RemoveAllPRMessage extends PartitionMessageWithDirectReply {
             EventID eventID = removeAllPRData[0].getEventID();
             ThreadIdentifier membershipID =
                 new ThreadIdentifier(eventID.getMembershipID(), eventID.getThreadID());
-            bucketRegion.recordBulkOpStart(membershipID);
+            bucketRegion.recordBulkOpStart(membershipID, eventID);
           }
           bucketRegion.waitUntilLocked(keys);
           boolean lockedForPrimary = false;

http://git-wip-us.apache.org/repos/asf/geode/blob/16832655/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ClientProxyMembershipID.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ClientProxyMembershipID.java b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ClientProxyMembershipID.java
index 2cbf63b..2fd508b 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ClientProxyMembershipID.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ClientProxyMembershipID.java
@@ -38,7 +38,7 @@ import static org.apache.geode.distributed.ConfigurationProperties.*;
  * 
  * 
  */
-public final class ClientProxyMembershipID
+public class ClientProxyMembershipID
     implements DataSerializableFixedID, Serializable, Externalizable {
 
   private static final Logger logger = LogService.getLogger();

http://git-wip-us.apache.org/repos/asf/geode/blob/16832655/geode-core/src/test/java/org/apache/geode/internal/cache/AbstractDistributedRegionJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/cache/AbstractDistributedRegionJUnitTest.java b/geode-core/src/test/java/org/apache/geode/internal/cache/AbstractDistributedRegionJUnitTest.java
index ba2f794..a8cbdde 100755
--- a/geode-core/src/test/java/org/apache/geode/internal/cache/AbstractDistributedRegionJUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/cache/AbstractDistributedRegionJUnitTest.java
@@ -106,7 +106,8 @@ public abstract class AbstractDistributedRegionJUnitTest {
   protected abstract void verifyDistributeUpdateEntryVersion(DistributedRegion region,
       EntryEventImpl event, int cnt);
 
-  protected DistributedRegion prepare(boolean isConcurrencyChecksEnabled) {
+  protected DistributedRegion prepare(boolean isConcurrencyChecksEnabled,
+      boolean testHasSeenEvent) {
     GemFireCacheImpl cache = Fakes.cache();
 
     // create region attributes and internal region arguments
@@ -122,14 +123,16 @@ public abstract class AbstractDistributedRegionJUnitTest {
     }
 
     doNothing().when(region).notifyGatewaySender(any(), any());
-    doReturn(true).when(region).hasSeenEvent(any(EntryEventImpl.class));
+    if (!testHasSeenEvent) {
+      doReturn(true).when(region).hasSeenEvent(any(EntryEventImpl.class));
+    }
     return region;
   }
 
   @Test
   public void testConcurrencyFalseTagNull() {
     // case 1: concurrencyCheckEanbled = false, version tag is null: distribute
-    DistributedRegion region = prepare(false);
+    DistributedRegion region = prepare(false, false);
     EntryEventImpl event = createDummyEvent(region);
     assertNull(event.getVersionTag());
     doTest(region, event, 1);
@@ -138,7 +141,7 @@ public abstract class AbstractDistributedRegionJUnitTest {
   @Test
   public void testConcurrencyTrueTagNull() {
     // case 2: concurrencyCheckEanbled = true, version tag is null: not to distribute
-    DistributedRegion region = prepare(true);
+    DistributedRegion region = prepare(true, false);
     EntryEventImpl event = createDummyEvent(region);
     assertNull(event.getVersionTag());
     doTest(region, event, 0);
@@ -147,7 +150,7 @@ public abstract class AbstractDistributedRegionJUnitTest {
   @Test
   public void testConcurrencyTrueTagInvalid() {
     // case 3: concurrencyCheckEanbled = true, version tag is invalid: not to distribute
-    DistributedRegion region = prepare(true);
+    DistributedRegion region = prepare(true, false);
     EntryEventImpl event = createDummyEvent(region);
     VersionTag tag = createVersionTag(false);
     event.setVersionTag(tag);
@@ -158,7 +161,7 @@ public abstract class AbstractDistributedRegionJUnitTest {
   @Test
   public void testConcurrencyTrueTagValid() {
     // case 4: concurrencyCheckEanbled = true, version tag is valid: distribute
-    DistributedRegion region = prepare(true);
+    DistributedRegion region = prepare(true, false);
     EntryEventImpl event = createDummyEvent(region);
     VersionTag tag = createVersionTag(true);
     event.setVersionTag(tag);

http://git-wip-us.apache.org/repos/asf/geode/blob/16832655/geode-core/src/test/java/org/apache/geode/internal/cache/DistributedRegionJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/cache/DistributedRegionJUnitTest.java b/geode-core/src/test/java/org/apache/geode/internal/cache/DistributedRegionJUnitTest.java
index 7525f35..ce21c67 100755
--- a/geode-core/src/test/java/org/apache/geode/internal/cache/DistributedRegionJUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/cache/DistributedRegionJUnitTest.java
@@ -14,15 +14,23 @@
  */
 package org.apache.geode.internal.cache;
 
+import static org.junit.Assert.assertTrue;
 import static org.mockito.Matchers.any;
 import static org.mockito.Matchers.anyLong;
 import static org.mockito.Matchers.eq;
-import static org.mockito.Mockito.anyBoolean;
 import static org.mockito.Mockito.*;
 
-import org.junit.experimental.categories.Category;
+import java.util.concurrent.ConcurrentMap;
 
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.apache.geode.cache.Operation;
 import org.apache.geode.cache.RegionAttributes;
+import org.apache.geode.distributed.DistributedMember;
+import org.apache.geode.internal.cache.EventTracker.BulkOpHolder;
+import org.apache.geode.internal.cache.ha.ThreadIdentifier;
+import org.apache.geode.internal.cache.tier.sockets.ClientProxyMembershipID;
+import org.apache.geode.internal.cache.versions.VersionTag;
 import org.apache.geode.test.junit.categories.UnitTest;
 
 @Category(UnitTest.class)
@@ -99,5 +107,47 @@ public class DistributedRegionJUnitTest extends AbstractDistributedRegionJUnitTe
     }
   }
 
+  @Test
+  public void retriedBulkOpGetsSavedVersionTag() {
+    DistributedRegion region = prepare(true, true);
+    DistributedMember member = mock(DistributedMember.class);
+    ClientProxyMembershipID memberId = mock(ClientProxyMembershipID.class);
+    doReturn(false).when(region).isUsedForPartitionedRegionBucket();
+
+    byte[] memId = {1, 2, 3};
+    long threadId = 1;
+    long retrySeqId = 1;
+    ThreadIdentifier tid = new ThreadIdentifier(memId, threadId);
+    EventID retryEventID = new EventID(memId, threadId, retrySeqId);
+    boolean skipCallbacks = true;
+    int size = 2;
+    recordPutAllEvents(region, memId, threadId, skipCallbacks, member, memberId, size);
+    EventTracker eventTracker = region.getEventTracker();
+
+    ConcurrentMap<ThreadIdentifier, BulkOpHolder> map = eventTracker.getRecordedBulkOpVersionTags();
+    BulkOpHolder holder = map.get(tid);
+
+    EntryEventImpl retryEvent = EntryEventImpl.create(region, Operation.PUTALL_CREATE, "key1",
+        "value1", null, false, member, !skipCallbacks, retryEventID);
+    retryEvent.setContext(memberId);
+    retryEvent.setPutAllOperation(mock(DistributedPutAllOperation.class));
+
+    region.hasSeenEvent(retryEvent);
+    assertTrue(retryEvent.getVersionTag().equals(holder.entryVersionTags.get(retryEventID)));
+  }
+
+  protected void recordPutAllEvents(DistributedRegion region, byte[] memId, long threadId,
+      boolean skipCallbacks, DistributedMember member, ClientProxyMembershipID memberId, int size) {
+    EntryEventImpl[] events = new EntryEventImpl[size];
+    EventTracker eventTracker = region.getEventTracker();
+    for (int i = 0; i < size; i++) {
+      events[i] = EntryEventImpl.create(region, Operation.PUTALL_CREATE, "key" + i, "value" + i,
+          null, false, member, !skipCallbacks, new EventID(memId, threadId, i + 1));
+      events[i].setContext(memberId);
+      events[i].setVersionTag(mock(VersionTag.class));
+      eventTracker.recordEvent(events[i]);
+    }
+  }
+
 }
 


[08/10] geode git commit: GEODE-2847: Get correct version tags for retried bulk operations

Posted by zh...@apache.org.
GEODE-2847: Get correct version tags for retried bulk operations

Get correct version tags from recordedBulkOpVersionTags in the EventTracker.
Do not remove recordedBulkOpVersionTags entries prematurely.
Add a unit test that would fail without these fixes.


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/0dd2552b
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/0dd2552b
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/0dd2552b

Branch: refs/heads/feature/GEM-1353
Commit: 0dd2552b111423ddb1662c30a33c0eacc15eb087
Parents: 480a1e0
Author: eshu <es...@pivotal.io>
Authored: Wed May 3 10:07:53 2017 -0700
Committer: eshu <es...@pivotal.io>
Committed: Wed May 3 10:07:53 2017 -0700

----------------------------------------------------------------------
 .../geode/internal/cache/EventTrackerTest.java  | 94 ++++++++++++++++++++
 1 file changed, 94 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/0dd2552b/geode-core/src/test/java/org/apache/geode/internal/cache/EventTrackerTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/cache/EventTrackerTest.java b/geode-core/src/test/java/org/apache/geode/internal/cache/EventTrackerTest.java
new file mode 100644
index 0000000..d74b3d5
--- /dev/null
+++ b/geode-core/src/test/java/org/apache/geode/internal/cache/EventTrackerTest.java
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.geode.internal.cache;
+
+import static org.junit.Assert.*;
+import static org.mockito.Mockito.*;
+
+import java.util.concurrent.ConcurrentMap;
+
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.apache.geode.CancelCriterion;
+import org.apache.geode.cache.DataPolicy;
+import org.apache.geode.cache.Operation;
+import org.apache.geode.cache.RegionAttributes;
+import org.apache.geode.distributed.DistributedMember;
+import org.apache.geode.internal.cache.EventTracker.BulkOpHolder;
+import org.apache.geode.internal.cache.ha.ThreadIdentifier;
+import org.apache.geode.internal.cache.tier.sockets.ClientProxyMembershipID;
+import org.apache.geode.internal.cache.versions.VersionTag;
+import org.apache.geode.test.junit.categories.UnitTest;
+
+@Category(UnitTest.class)
+public class EventTrackerTest {
+  LocalRegion lr;
+  RegionAttributes<?, ?> ra;
+  EntryEventImpl[] events;
+  EventTracker eventTracker;
+  ClientProxyMembershipID memberId;
+  DistributedMember member;
+
+  @Before
+  public void setUp() {
+    lr = mock(LocalRegion.class);
+    ra = mock(RegionAttributes.class);
+    when(lr.createStopper()).thenCallRealMethod();
+    CancelCriterion stopper = lr.createStopper();
+    when(lr.getStopper()).thenReturn(stopper);
+    memberId = mock(ClientProxyMembershipID.class);
+    when(lr.getAttributes()).thenReturn(ra);
+    when(ra.getDataPolicy()).thenReturn(mock(DataPolicy.class));
+    when(lr.getConcurrencyChecksEnabled()).thenReturn(true);
+
+    member = mock(DistributedMember.class);
+  }
+
+  @Test
+  public void retriedBulkOpDoesNotRemoveRecordedBulkOpVersionTags() {
+    byte[] memId = {1, 2, 3};
+    long threadId = 1;
+    long retrySeqId = 1;
+    ThreadIdentifier tid = new ThreadIdentifier(memId, threadId);
+    EventID retryEventID = new EventID(memId, threadId, retrySeqId);
+    boolean skipCallbacks = true;
+    int size = 5;
+    recordPutAllEvents(memId, threadId, skipCallbacks, size);
+
+    ConcurrentMap<ThreadIdentifier, BulkOpHolder> map = eventTracker.getRecordedBulkOpVersionTags();
+    BulkOpHolder holder = map.get(tid);
+    int beforeSize = holder.entryVersionTags.size();
+
+    eventTracker.recordBulkOpStart(tid, retryEventID);
+    map = eventTracker.getRecordedBulkOpVersionTags();
+    holder = map.get(tid);
+    // Retried bulk op should not remove existing BulkOpVersionTags
+    assertTrue(holder.entryVersionTags.size() == beforeSize);
+  }
+
+  private void recordPutAllEvents(byte[] memId, long threadId, boolean skipCallbacks, int size) {
+    events = new EntryEventImpl[size];
+    eventTracker = new EventTracker(lr);
+    for (int i = 0; i < size; i++) {
+      events[i] = EntryEventImpl.create(lr, Operation.PUTALL_CREATE, "key" + i, "value" + i, null,
+          false, member, !skipCallbacks, new EventID(memId, threadId, i + 1));
+      events[i].setContext(memberId);
+      events[i].setVersionTag(mock(VersionTag.class));
+      eventTracker.recordEvent(events[i]);
+    }
+  }
+}