Posted to notifications@geode.apache.org by GitBox <gi...@apache.org> on 2020/10/21 14:29:32 UTC

[GitHub] [geode] jdeppe-pivotal opened a new pull request #5647: GEODE-8633: Add concurrency tests for Redis HDEL

jdeppe-pivotal opened a new pull request #5647:
URL: https://github.com/apache/geode/pull/5647


   - Also add the ability for ConcurrentLoopingThreads to be run
     asynchronously.
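   For readers unfamiliar with the helper: ConcurrentLoopingThreads drives several actions over the same loop index in lockstep. The sketch below is a simplified, self-contained illustration of that idea and of what an asynchronous start/await split enables; the class and method names here are illustrative, not the actual Geode API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Consumer;

// Simplified stand-in for a looping-threads test helper: each action runs once
// per iteration, and a barrier keeps the threads in lockstep so every iteration
// is executed concurrently by all actions.
class LoopingThreadsSketch {
  private final int iterations;
  private final List<Consumer<Integer>> actions;
  private final List<Future<?>> futures = new ArrayList<>();
  private ExecutorService pool;

  @SafeVarargs
  LoopingThreadsSketch(int iterations, Consumer<Integer>... actions) {
    this.iterations = iterations;
    this.actions = Arrays.asList(actions);
  }

  // Synchronous form: start all looping threads and block until they finish.
  void run() {
    startAsync();
    await();
  }

  // Asynchronous form: start the threads and return immediately, so a test can
  // do other work (e.g. crash and restart a server) while the loops are running.
  LoopingThreadsSketch startAsync() {
    CyclicBarrier barrier = new CyclicBarrier(actions.size());
    pool = Executors.newFixedThreadPool(actions.size());
    for (Consumer<Integer> action : actions) {
      futures.add(pool.submit(() -> {
        for (int i = 0; i < iterations; i++) {
          try {
            barrier.await(); // line all threads up at each iteration
          } catch (Exception e) {
            throw new RuntimeException(e);
          }
          action.accept(i);
        }
      }));
    }
    return this;
  }

  // Block until every looping thread has completed all of its iterations.
  void await() {
    try {
      for (Future<?> future : futures) {
        future.get();
      }
    } catch (Exception e) {
      throw new RuntimeException(e);
    } finally {
      pool.shutdown();
    }
  }
}
```

   In this style, a plain test calls run(), while a crash-and-restart test can call startAsync(), bounce a server VM, and only then call await() before asserting.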
   
   Authored-by: Jens Deppe <jd...@vmware.com>
   
   Thank you for submitting a contribution to Apache Geode.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
   
   - [ ] Has your PR been rebased against the latest commit within the target branch (typically `develop`)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   - [ ] Does `gradlew build` run cleanly?
   
   - [ ] Have you written or updated unit tests to verify your changes?
   
   - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
   
   ### Note:
   Please ensure that once the PR is submitted, you check Concourse for build issues and
   submit an update to your PR as soon as possible. If you need help, please send an
   email to dev@geode.apache.org.
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [geode] jhutchison commented on pull request #5647: GEODE-8633: Add concurrency tests for Redis HDEL

Posted by GitBox <gi...@apache.org>.
jhutchison commented on pull request #5647:
URL: https://github.com/apache/geode/pull/5647#issuecomment-713692627


   Not sure if this is overkill, but I noticed that there isn't a test to check that the items are no longer in the region after the HDEL occurs.





[GitHub] [geode] jdeppe-pivotal merged pull request #5647: GEODE-8633: Add concurrency tests for Redis HDEL

Posted by GitBox <gi...@apache.org>.
jdeppe-pivotal merged pull request #5647:
URL: https://github.com/apache/geode/pull/5647


   





[GitHub] [geode] jdeppe-pivotal commented on a change in pull request #5647: GEODE-8633: Add concurrency tests for Redis HDEL

Posted by GitBox <gi...@apache.org>.
jdeppe-pivotal commented on a change in pull request #5647:
URL: https://github.com/apache/geode/pull/5647#discussion_r509608124



##########
File path: geode-redis/src/distributedTest/java/org/apache/geode/redis/internal/executor/hash/HdelDUnitTest.java
##########
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.geode.redis.internal.executor.hash;
+
+import static org.apache.geode.distributed.ConfigurationProperties.REDIS_PORT;
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicLong;
+
+import io.lettuce.core.ClientOptions;
+import io.lettuce.core.RedisClient;
+import io.lettuce.core.api.StatefulRedisConnection;
+import io.lettuce.core.api.sync.RedisCommands;
+import io.lettuce.core.resource.ClientResources;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+
+import org.apache.geode.internal.AvailablePortHelper;
+import org.apache.geode.redis.ConcurrentLoopingThreads;
+import org.apache.geode.redis.session.springRedisTestApplication.config.DUnitSocketAddressResolver;
+import org.apache.geode.test.dunit.rules.MemberVM;
+import org.apache.geode.test.dunit.rules.RedisClusterStartupRule;
+import org.apache.geode.test.junit.rules.ExecutorServiceRule;
+
+public class HdelDUnitTest {
+
+  @ClassRule
+  public static RedisClusterStartupRule cluster = new RedisClusterStartupRule();
+
+  @ClassRule
+  public static ExecutorServiceRule executor = new ExecutorServiceRule();
+
+  private static final int HASH_SIZE = 50000;
+  private static MemberVM locator;
+  private static MemberVM server1;
+  private static MemberVM server2;
+  private static int[] redisPorts;
+  private static RedisCommands<String, String> lettuce;
+  private static StatefulRedisConnection<String, String> connection;
+  private static ClientResources resources;
+
+  @BeforeClass
+  public static void classSetup() {
+    redisPorts = AvailablePortHelper.getRandomAvailableTCPPorts(3);
+
+    String redisPort1 = "" + redisPorts[0];
+    String redisPort2 = "" + redisPorts[1];
+
+    locator = cluster.startLocatorVM(0);
+
+    server1 = startRedisVM(1, redisPorts[0]);
+    server2 = startRedisVM(2, redisPorts[1]);
+
+    DUnitSocketAddressResolver dnsResolver =
+        new DUnitSocketAddressResolver(new String[] {redisPort2, redisPort1});
+
+    resources = ClientResources.builder()
+        .socketAddressResolver(dnsResolver)
+        .build();
+
+    RedisClient redisClient = RedisClient.create(resources, "redis://localhost");
+    redisClient.setOptions(ClientOptions.builder()
+        .autoReconnect(true)
+        .build());
+    connection = redisClient.connect();
+    lettuce = connection.sync();
+  }
+
+  private static MemberVM startRedisVM(int vmID, int redisPort) {
+    int locatorPort = locator.getPort();
+
+    return cluster.startRedisVM(vmID, x -> x
+        .withConnectionToLocator(locatorPort)
+        .withProperty(REDIS_PORT, "" + redisPort));
+  }
+
+  @Before
+  public void testSetup() {
+    lettuce.flushall();
+  }
+
+  @AfterClass
+  public static void tearDown() throws Exception {
+    resources.shutdown().get();
+    connection.close();
+    server1.stop();
+    server2.stop();
+  }
+
+  @Test
+  public void testConcurrentHDelReturnExceptedNumberOfDeletions() {

Review comment:
       Thanks! Done.

##########
File path: geode-redis/src/distributedTest/java/org/apache/geode/redis/internal/executor/hash/HdelDUnitTest.java
##########
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.geode.redis.internal.executor.hash;
+
+import static org.apache.geode.distributed.ConfigurationProperties.REDIS_PORT;
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicLong;
+
+import io.lettuce.core.ClientOptions;
+import io.lettuce.core.RedisClient;
+import io.lettuce.core.api.StatefulRedisConnection;
+import io.lettuce.core.api.sync.RedisCommands;
+import io.lettuce.core.resource.ClientResources;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+
+import org.apache.geode.internal.AvailablePortHelper;
+import org.apache.geode.redis.ConcurrentLoopingThreads;
+import org.apache.geode.redis.session.springRedisTestApplication.config.DUnitSocketAddressResolver;
+import org.apache.geode.test.dunit.rules.MemberVM;
+import org.apache.geode.test.dunit.rules.RedisClusterStartupRule;
+import org.apache.geode.test.junit.rules.ExecutorServiceRule;
+
+public class HdelDUnitTest {
+
+  @ClassRule
+  public static RedisClusterStartupRule cluster = new RedisClusterStartupRule();
+
+  @ClassRule
+  public static ExecutorServiceRule executor = new ExecutorServiceRule();
+
+  private static final int HASH_SIZE = 50000;
+  private static MemberVM locator;
+  private static MemberVM server1;
+  private static MemberVM server2;
+  private static int[] redisPorts;
+  private static RedisCommands<String, String> lettuce;
+  private static StatefulRedisConnection<String, String> connection;
+  private static ClientResources resources;
+
+  @BeforeClass
+  public static void classSetup() {
+    redisPorts = AvailablePortHelper.getRandomAvailableTCPPorts(3);
+
+    String redisPort1 = "" + redisPorts[0];
+    String redisPort2 = "" + redisPorts[1];
+
+    locator = cluster.startLocatorVM(0);
+
+    server1 = startRedisVM(1, redisPorts[0]);
+    server2 = startRedisVM(2, redisPorts[1]);
+
+    DUnitSocketAddressResolver dnsResolver =
+        new DUnitSocketAddressResolver(new String[] {redisPort2, redisPort1});
+
+    resources = ClientResources.builder()
+        .socketAddressResolver(dnsResolver)
+        .build();
+
+    RedisClient redisClient = RedisClient.create(resources, "redis://localhost");
+    redisClient.setOptions(ClientOptions.builder()
+        .autoReconnect(true)
+        .build());
+    connection = redisClient.connect();
+    lettuce = connection.sync();
+  }
+
+  private static MemberVM startRedisVM(int vmID, int redisPort) {
+    int locatorPort = locator.getPort();
+
+    return cluster.startRedisVM(vmID, x -> x
+        .withConnectionToLocator(locatorPort)
+        .withProperty(REDIS_PORT, "" + redisPort));
+  }
+
+  @Before
+  public void testSetup() {
+    lettuce.flushall();
+  }
+
+  @AfterClass
+  public static void tearDown() throws Exception {
+    resources.shutdown().get();
+    connection.close();
+    server1.stop();
+    server2.stop();
+  }
+
+  @Test
+  public void testConcurrentHDelReturnExceptedNumberOfDeletions() {
+    AtomicLong client1Deletes = new AtomicLong();
+    AtomicLong client2Deletes = new AtomicLong();
+
+    String key = "HSET";
+
+    Map<String, String> setUpData =
+        makeHashMap(HASH_SIZE, "field", "value");
+
+    lettuce.hset(key, setUpData);
+
+    new ConcurrentLoopingThreads(HASH_SIZE,
+        i -> {
+          long deleted = lettuce.hdel(key, "field" + i, "value" + i);
+          client1Deletes.addAndGet(deleted);
+        },
+        i -> {
+          long deleted = lettuce.hdel(key, "field" + i, "value" + i);
+          client2Deletes.addAndGet(deleted);
+        })
+            .run();
+
+    assertThat(client1Deletes.get() + client2Deletes.get()).isEqualTo(HASH_SIZE);
+  }
+
+  @Test
+  public void testConcurrentDel_whenServerCrashesAndRestarts() {

Review comment:
       Done







[GitHub] [geode] sabbey37 commented on a change in pull request #5647: GEODE-8633: Add concurrency tests for Redis HDEL

Posted by GitBox <gi...@apache.org>.
sabbey37 commented on a change in pull request #5647:
URL: https://github.com/apache/geode/pull/5647#discussion_r509396258



##########
File path: geode-redis/src/distributedTest/java/org/apache/geode/redis/internal/executor/hash/HdelDUnitTest.java
##########
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.geode.redis.internal.executor.hash;
+
+import static org.apache.geode.distributed.ConfigurationProperties.REDIS_PORT;
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicLong;
+
+import io.lettuce.core.ClientOptions;
+import io.lettuce.core.RedisClient;
+import io.lettuce.core.api.StatefulRedisConnection;
+import io.lettuce.core.api.sync.RedisCommands;
+import io.lettuce.core.resource.ClientResources;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+
+import org.apache.geode.internal.AvailablePortHelper;
+import org.apache.geode.redis.ConcurrentLoopingThreads;
+import org.apache.geode.redis.session.springRedisTestApplication.config.DUnitSocketAddressResolver;
+import org.apache.geode.test.dunit.rules.MemberVM;
+import org.apache.geode.test.dunit.rules.RedisClusterStartupRule;
+import org.apache.geode.test.junit.rules.ExecutorServiceRule;
+
+public class HdelDUnitTest {
+
+  @ClassRule
+  public static RedisClusterStartupRule cluster = new RedisClusterStartupRule();
+
+  @ClassRule
+  public static ExecutorServiceRule executor = new ExecutorServiceRule();
+
+  private static final int HASH_SIZE = 50000;
+  private static MemberVM locator;
+  private static MemberVM server1;
+  private static MemberVM server2;
+  private static int[] redisPorts;
+  private static RedisCommands<String, String> lettuce;
+  private static StatefulRedisConnection<String, String> connection;
+  private static ClientResources resources;
+
+  @BeforeClass
+  public static void classSetup() {
+    redisPorts = AvailablePortHelper.getRandomAvailableTCPPorts(3);
+
+    String redisPort1 = "" + redisPorts[0];
+    String redisPort2 = "" + redisPorts[1];
+
+    locator = cluster.startLocatorVM(0);
+
+    server1 = startRedisVM(1, redisPorts[0]);
+    server2 = startRedisVM(2, redisPorts[1]);
+
+    DUnitSocketAddressResolver dnsResolver =
+        new DUnitSocketAddressResolver(new String[] {redisPort2, redisPort1});
+
+    resources = ClientResources.builder()
+        .socketAddressResolver(dnsResolver)
+        .build();
+
+    RedisClient redisClient = RedisClient.create(resources, "redis://localhost");
+    redisClient.setOptions(ClientOptions.builder()
+        .autoReconnect(true)
+        .build());
+    connection = redisClient.connect();
+    lettuce = connection.sync();
+  }
+
+  private static MemberVM startRedisVM(int vmID, int redisPort) {
+    int locatorPort = locator.getPort();
+
+    return cluster.startRedisVM(vmID, x -> x
+        .withConnectionToLocator(locatorPort)
+        .withProperty(REDIS_PORT, "" + redisPort));
+  }
+
+  @Before
+  public void testSetup() {
+    lettuce.flushall();
+  }
+
+  @AfterClass
+  public static void tearDown() throws Exception {
+    resources.shutdown().get();
+    connection.close();
+    server1.stop();
+    server2.stop();
+  }
+
+  @Test
+  public void testConcurrentHDelReturnExceptedNumberOfDeletions() {

Review comment:
       Since you'll have to re-trigger anyway, the name could be changed to correct some grammatical errors and fit in with the test name below it: `testConcurrentHDel_returnsExpectedNumberOfDeletions`

##########
File path: geode-redis/src/distributedTest/java/org/apache/geode/redis/internal/executor/hash/HdelDUnitTest.java
##########
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.geode.redis.internal.executor.hash;
+
+import static org.apache.geode.distributed.ConfigurationProperties.REDIS_PORT;
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicLong;
+
+import io.lettuce.core.ClientOptions;
+import io.lettuce.core.RedisClient;
+import io.lettuce.core.api.StatefulRedisConnection;
+import io.lettuce.core.api.sync.RedisCommands;
+import io.lettuce.core.resource.ClientResources;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+
+import org.apache.geode.internal.AvailablePortHelper;
+import org.apache.geode.redis.ConcurrentLoopingThreads;
+import org.apache.geode.redis.session.springRedisTestApplication.config.DUnitSocketAddressResolver;
+import org.apache.geode.test.dunit.rules.MemberVM;
+import org.apache.geode.test.dunit.rules.RedisClusterStartupRule;
+import org.apache.geode.test.junit.rules.ExecutorServiceRule;
+
+public class HdelDUnitTest {
+
+  @ClassRule
+  public static RedisClusterStartupRule cluster = new RedisClusterStartupRule();
+
+  @ClassRule
+  public static ExecutorServiceRule executor = new ExecutorServiceRule();
+
+  private static final int HASH_SIZE = 50000;
+  private static MemberVM locator;
+  private static MemberVM server1;
+  private static MemberVM server2;
+  private static int[] redisPorts;
+  private static RedisCommands<String, String> lettuce;
+  private static StatefulRedisConnection<String, String> connection;
+  private static ClientResources resources;
+
+  @BeforeClass
+  public static void classSetup() {
+    redisPorts = AvailablePortHelper.getRandomAvailableTCPPorts(3);
+
+    String redisPort1 = "" + redisPorts[0];
+    String redisPort2 = "" + redisPorts[1];
+
+    locator = cluster.startLocatorVM(0);
+
+    server1 = startRedisVM(1, redisPorts[0]);
+    server2 = startRedisVM(2, redisPorts[1]);
+
+    DUnitSocketAddressResolver dnsResolver =
+        new DUnitSocketAddressResolver(new String[] {redisPort2, redisPort1});
+
+    resources = ClientResources.builder()
+        .socketAddressResolver(dnsResolver)
+        .build();
+
+    RedisClient redisClient = RedisClient.create(resources, "redis://localhost");
+    redisClient.setOptions(ClientOptions.builder()
+        .autoReconnect(true)
+        .build());
+    connection = redisClient.connect();
+    lettuce = connection.sync();
+  }
+
+  private static MemberVM startRedisVM(int vmID, int redisPort) {
+    int locatorPort = locator.getPort();
+
+    return cluster.startRedisVM(vmID, x -> x
+        .withConnectionToLocator(locatorPort)
+        .withProperty(REDIS_PORT, "" + redisPort));
+  }
+
+  @Before
+  public void testSetup() {
+    lettuce.flushall();
+  }
+
+  @AfterClass
+  public static void tearDown() throws Exception {
+    resources.shutdown().get();
+    connection.close();
+    server1.stop();
+    server2.stop();
+  }
+
+  @Test
+  public void testConcurrentHDelReturnExceptedNumberOfDeletions() {
+    AtomicLong client1Deletes = new AtomicLong();
+    AtomicLong client2Deletes = new AtomicLong();
+
+    String key = "HSET";
+
+    Map<String, String> setUpData =
+        makeHashMap(HASH_SIZE, "field", "value");
+
+    lettuce.hset(key, setUpData);
+
+    new ConcurrentLoopingThreads(HASH_SIZE,
+        i -> {
+          long deleted = lettuce.hdel(key, "field" + i, "value" + i);
+          client1Deletes.addAndGet(deleted);
+        },
+        i -> {
+          long deleted = lettuce.hdel(key, "field" + i, "value" + i);
+          client2Deletes.addAndGet(deleted);
+        })
+            .run();
+
+    assertThat(client1Deletes.get() + client2Deletes.get()).isEqualTo(HASH_SIZE);
+  }
+
+  @Test
+  public void testConcurrentDel_whenServerCrashesAndRestarts() {

Review comment:
       Since you have to re-trigger anyway, you could change the name to HDel instead of Del so it better fits with the test name above: `testConcurrentHDel_whenServerCrashesAndRestarts_deletesAllHashFieldsAndValues`







[GitHub] [geode] jdeppe-pivotal commented on pull request #5647: GEODE-8633: Add concurrency tests for Redis HDEL

Posted by GitBox <gi...@apache.org>.
jdeppe-pivotal commented on pull request #5647:
URL: https://github.com/apache/geode/pull/5647#issuecomment-713824161


   > Not sure if this is overkill, but I noticed that there isn't a test to check that the items are no longer in the region after the HDEL occurs.
   
   I've added that assertion. 
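   
   A minimal sketch of the kind of check being discussed, assuming it runs after the concurrent loops complete and reuses the test's existing `lettuce` commands and hash `key` (the exact assertion added in the PR may differ):
   
```java
// After every HDEL has completed, no fields should remain under the key:
// HGETALL on a key with no fields returns an empty map, and HLEN returns 0.
assertThat(lettuce.hgetall(key)).isEmpty();
assertThat(lettuce.hlen(key)).isEqualTo(0);
```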

