Posted to commits@geode.apache.org by on...@apache.org on 2019/08/20 22:20:39 UTC

[geode] branch release/1.9.1 updated (2b6a954 -> 8e541c5)

This is an automated email from the ASF dual-hosted git repository.

onichols pushed a change to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git.


 discard 2b6a954  Upgraded version number for releasing 1.9.1
 discard ded8d51  Remove unattended-upgrades and autoremove unnecessary stuff. (#3881)
 discard 1ac6f38  GEODE-6734: Change packer image resources and scripts to Bionic
 discard 8355f5d  [GEODE-7027] Use cygwin to get rsync instead of chocolatey directly. (#3863)
 discard 43debcd  Keep newest packer but install specific version as well. (#3611)
 discard 9842ae0  Fix packer configuration for windows image. (#3837)
 discard 3c5f105  Add flags to allow for local building. (#3830)
 discard a137e27  Update windows image and tweaks to support it. (#3649)
 discard 9bdc01a  Removing lines we hopefully don't need anymore. (#3613)
 discard 0423b77  Remove hopefully now spurious line. (#3612)
 discard e92f729  Update windows source image family. (#3602)
 discard 743f274  move JDK11 testing from OpenJDK to AdoptOpenJDK going forward
 discard 50b3171  GEODE-7050: Use Log4jAgent only if Log4j is using Log4jProvider (#3892)
 discard 3538341  GEODE-6959: Prevent NPE in GMSMembershipManager for null AlertAppender (#3899)
 discard 6303488  GEODE-7058: Mark log4j-core optional in geode-core
 discard 1bbbb7d  Upgraded version number for releasing 1.9.1
 discard 3f0498f  fixing spotless and pmd errors from reverts
 discard 4a11e4d  Revert "GEODE-2113 Implement SSL over NIO"
 discard 6d18510  Revert "GEODE-2113 implement SSL over NIO"
 discard 8acc9aa  Revert "GEODE-6389 CI Failure: ConcurrentWANPropagation_1_DUnitTest.testReplicatedSerialPropagation"
 discard 4aed345  Revert "GEODE-6389 CI Failure: ConcurrentWANPropagation_1_DUnitTest.testReplicatedSerialPropagation_withoutRemoteSite"
 discard e768c8c  Revert "GEODE-6468 [CI Failure] ClusterCommunicationsDUnitTest fails on createEntryAndVerifyUpdate"
     add 41cd486  GEODE-6570 processing of cached join request delays view installation
     add 85e1362  GEODE-6559: PdxInstance.getObject() is using class from older jar in case of Reconnect (#3353)
     add 75ac498  GEODE-6589: Parameterize gradle project group for use in GradleBuildWithGeodeCoreAcceptanceTest (#3395)
     add 4a86807  Fix geoge-book redirect url for 1.9.0
     add 5d2b8d9  GEODE-3948 fixing handling of sotimeout in Message.receive()
     add 6b05cae  Ignore GrgitException when building from src dist
     add e0c29b1  GEODE-6195 putIfAbsent may get a returned value caused by the same operation due to retry
     add ed13a72  GEODE-6664 CI failure: org.apache.geode.ClusterCommunicationsDUnitTest.receiveBigResponse
     add 097353f  GEODE-6662 NioPlainEngine.ensureWrappedCapacity
     add 0fea07a  GEODE-6423 availability checks sometimes immediately initiate removal
     add 3601d83  Fixes CI benchmark baseline selection.
     add 40ebccd  adding my PGP block as instructed in release steps
     add 7d7f8f1  GEODE-6630: move allBucketsRecoveredFromDisk count down latch (#3477)
     add 8e92509  Use branch of benchmarks targeted for release/1.9.0.
     add b912ac7  Fixes benchmarks branch
     add bf4ee80  adding my GPG key as per release instructions
     add c0a73d1  bump the geode version in the Dockerfile
     new 92b3ecc  Upgraded version number for releasing 1.9.1
     new 5dfad63  Revert "GEODE-6468 [CI Failure] ClusterCommunicationsDUnitTest fails on createEntryAndVerifyUpdate"
     new 7b956f5  Revert "GEODE-6389 CI Failure: ConcurrentWANPropagation_1_DUnitTest.testReplicatedSerialPropagation_withoutRemoteSite"
     new 8bab401  Revert "GEODE-6389 CI Failure: ConcurrentWANPropagation_1_DUnitTest.testReplicatedSerialPropagation"
     new 3ae6dd8  Revert "GEODE-2113 implement SSL over NIO"
     new a42e8c7  Revert "GEODE-2113 Implement SSL over NIO"
     new b4cd0ba  fixing spotless and pmd errors from reverts
     new dea7136  GEODE-7058: Mark log4j-core optional in geode-core
     new ff23094  GEODE-6959: Prevent NPE in GMSMembershipManager for null AlertAppender (#3899)
     new bf47030  GEODE-7050: Use Log4jAgent only if Log4j is using Log4jProvider (#3892)
     new 9ca4dca  move JDK11 testing from OpenJDK to AdoptOpenJDK going forward
     new 4e9171f  Update windows source image family. (#3602)
     new 86d0765  Remove hopefully now spurious line. (#3612)
     new 558a4e4  Removing lines we hopefully don't need anymore. (#3613)
     new d33bda5  Update windows image and tweaks to support it. (#3649)
     new 1668eb6  Add flags to allow for local building. (#3830)
     new 97acee9  Fix packer configuration for windows image. (#3837)
     new 6ad1635  Keep newest packer but install specific version as well. (#3611)
     new be95b69  [GEODE-7027] Use cygwin to get rsync instead of chocolatey directly. (#3863)
     new 5a74697  GEODE-6734: Change packer image resources and scripts to Bionic
     new 8e541c5  Remove unattended-upgrades and autoremove unnecessary stuff. (#3881)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version. This situation occurs when a user
force-pushes a change (git push --force), rewriting the branch so the
repository contains something like this:

 * -- * -- B -- O -- O -- O   (2b6a954)
            \
             N -- N -- N   refs/heads/release/1.9.1 (8e541c5)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 21 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
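
For readers who want to see exactly what the rewrite changed, the old
and new tips can be compared locally. This is only a sketch, and it
assumes the pre-rewrite tip (2b6a954) is still resolvable in your
clone (for example via an earlier fetch or the remote-tracking
reflog); discarded objects may no longer be fetchable from the server.

    # Sketch: refresh the rewritten branch from the ASF mirror.
    git fetch https://gitbox.apache.org/repos/asf/geode.git release/1.9.1

    # Revisions removed by the force push (reachable from the old tip only),
    # i.e. the "discard" list above:
    git log --oneline 8e541c5..2b6a954

    # Revisions introduced by the force push (reachable from the new tip only),
    # i.e. the "add"/"new" lists above:
    git log --oneline 2b6a954..8e541c5

If 2b6a954 can no longer be resolved directly, the remote-tracking
reflog (git reflog show origin/release/1.9.1) may still record it,
depending on how recently the branch was fetched and local gc settings.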


Summary of changes:
 KEYS                                               |  58 ++++++++
 build.gradle                                       |   2 +-
 ci/pipelines/geode-build/jinja.template.yml        |   3 +-
 ci/pipelines/shared/jinja.variables.yml            |   5 +-
 ci/scripts/run_benchmarks.sh                       |   1 -
 docker/Dockerfile                                  |   4 +-
 geode-assembly/build.gradle                        |  10 +-
 .../GradleBuildWithGeodeCoreAcceptanceTest.java    |   7 +-
 .../gradle-test-projects/management/build.gradle   |   4 +-
 ...rConfigServerRestartWithJarDeployDUnitTest.java | 134 ++++++++++++++++++
 ...erConfigServerRestartWithJarDeployFunction.java |  63 +++++++++
 .../cache/RetryPutIfAbsentIntegrationTest.java     |  87 ++++++++++++
 .../gms/fd/GMSHealthMonitorJUnitTest.java          |   7 +-
 .../gms/membership/GMSJoinLeaveJUnitTest.java      |  28 ++++
 .../membership/gms/fd/GMSHealthMonitor.java        |  34 +++--
 .../membership/gms/membership/GMSJoinLeave.java    |  13 +-
 .../org/apache/geode/internal/JarDeployer.java     |   2 +-
 .../apache/geode/internal/cache/LocalRegion.java   |  62 +--------
 .../internal/cache/PRHARedundancyProvider.java     |  90 ++++---------
 .../cache/event/DistributedEventTracker.java       |   7 +
 .../geode/internal/cache/event/EventTracker.java   |   5 +
 .../cache/event/NonDistributedEventTracker.java    |   5 +
 .../geode/internal/cache/map/RegionMapPut.java     |  45 ++++++-
 ...yLogger.java => PersistentBucketRecoverer.java} | 111 ++++++++++++---
 .../geode/internal/cache/tier/sockets/Message.java |  26 ++--
 .../messages/ConfigurationResponse.java            |  11 +-
 .../geode/pdx/internal/PeerTypeRegistration.java   |   7 +
 .../geode/internal/cache/LocalRegionTest.java      | 150 ---------------------
 .../internal/cache/PRHARedundancyProviderTest.java |  22 ++-
 .../geode/internal/cache/map/RegionMapPutTest.java |  21 +++
 .../partitioned/PersistentBucketRecovererTest.java |  69 ++++++++++
 .../cache/tier/sockets/MessageJUnitTest.java       |  46 +++++++
 .../configuration/ClusterConfigTestBase.java       |   2 +-
 .../geode/test/junit/rules/MemberStarterRule.java  |   4 +
 .../geode/test/junit/rules/ServerStarterRule.java  |  12 +-
 .../apache/geode/test/compiler/JarBuilderTest.java |  12 +-
 .../org/apache/geode/test/compiler/JarBuilder.java |   9 ++
 37 files changed, 824 insertions(+), 354 deletions(-)
 create mode 100644 geode-core/src/distributedTest/java/org/apache/geode/management/internal/configuration/ClusterConfigServerRestartWithJarDeployDUnitTest.java
 create mode 100644 geode-core/src/distributedTest/resources/ClusterConfigServerRestartWithJarDeployFunction.java
 create mode 100644 geode-core/src/integrationTest/java/org/apache/geode/cache/RetryPutIfAbsentIntegrationTest.java
 rename geode-core/src/main/java/org/apache/geode/internal/cache/partitioned/{RedundancyLogger.java => PersistentBucketRecoverer.java} (81%)
 create mode 100644 geode-core/src/test/java/org/apache/geode/internal/cache/partitioned/PersistentBucketRecovererTest.java


[geode] 04/21: Revert "GEODE-6389 CI Failure: ConcurrentWANPropagation_1_DUnitTest.testReplicatedSerialPropagation"

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit 8bab401d91720cf0e4b2412d307611fd6a476672
Author: Bruce Schuchardt <bs...@pivotal.io>
AuthorDate: Thu Jun 27 14:35:11 2019 -0700

    Revert "GEODE-6389 CI Failure: ConcurrentWANPropagation_1_DUnitTest.testReplicatedSerialPropagation"
    
    This reverts commit dd6cde77787b7922f16c6b42a0b6ce9ce874b025.
---
 .../src/main/java/org/apache/geode/internal/net/SocketCloser.java     | 2 +-
 .../src/main/java/org/apache/geode/internal/tcp/Connection.java       | 4 ++++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/geode-core/src/main/java/org/apache/geode/internal/net/SocketCloser.java b/geode-core/src/main/java/org/apache/geode/internal/net/SocketCloser.java
index f083d50..cfa3991 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/net/SocketCloser.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/net/SocketCloser.java
@@ -169,7 +169,7 @@ public class SocketCloser {
    *
    * @param socket the socket to close
    * @param address identifies who the socket is connected to
-   * @param extra an optional Runnable with stuff to execute before the socket is closed
+   * @param extra an optional Runnable with stuff to execute in the async thread
    */
   public void asyncClose(final Socket socket, final String address, final Runnable extra) {
     if (socket == null || socket.isClosed()) {
diff --git a/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java
index 7fcbee5..247819a 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java
@@ -1585,6 +1585,10 @@ public class Connection implements Runnable {
         }
         asyncClose(false);
         this.owner.removeAndCloseThreadOwnedSockets();
+
+        if (this.isSharedResource()) {
+          releaseInputBuffer();
+        }
       }
       // make sure that if the reader thread exits we notify a thread waiting
       // for the handshake.


[geode] 06/21: Revert "GEODE-2113 Implement SSL over NIO"

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit a42e8c7f0e4649b24f0de3f0bc68c033e0c01fae
Author: Bruce Schuchardt <bs...@pivotal.io>
AuthorDate: Thu Jun 27 14:44:30 2019 -0700

    Revert "GEODE-2113 Implement SSL over NIO"
    
    This reverts commit 33077b3dab41260c70cece5b4f7ff1c42501b01c.
---
 .../geode/ClusterCommunicationsDUnitTest.java      |  421 ----
 .../CacheServerSSLConnectionDUnitTest.java         |  101 +-
 ...tServerHostNameVerificationDistributedTest.java |    5 -
 ...ToDataThrowsRuntimeExceptionRegressionTest.java |    3 +
 .../internal/cache/ConcurrentMapOpsDUnitTest.java  |    1 +
 .../internal/net/SSLSocketIntegrationTest.java     |  167 +-
 .../distributed/internal/DistributionStats.java    |    2 +-
 .../distributed/internal/direct/DirectChannel.java |   26 +-
 .../membership/gms/membership/GMSJoinLeave.java    |    2 +-
 .../membership/gms/mgr/GMSMembershipManager.java   |   10 +
 .../distributed/internal/tcpserver/TcpClient.java  |    4 -
 .../geode/internal/cache/EntryEventImpl.java       |   28 +-
 .../geode/internal/cache/PRQueryProcessor.java     |    2 +-
 .../apache/geode/internal/cache/properties.html    |   11 +
 .../monitoring/ThreadsMonitoringProcess.java       |    3 +-
 .../monitoring/executor/AbstractExecutor.java      |    7 +-
 .../org/apache/geode/internal/net/NioFilter.java   |   87 -
 .../apache/geode/internal/net/NioPlainEngine.java  |  125 --
 .../apache/geode/internal/net/NioSslEngine.java    |  415 ----
 .../apache/geode/internal/net/SocketCreator.java   |  118 +-
 .../apache/geode/internal/statistics/VMStats.java  |    2 +-
 .../statistics/platform/LinuxProcFsStatistics.java |  207 +-
 .../apache/geode/internal/stats50/VMStats50.java   |    2 +-
 .../geode/internal/{net => tcp}/Buffers.java       |  124 +-
 .../org/apache/geode/internal/tcp/Connection.java  | 2061 ++++++++++++--------
 .../apache/geode/internal/tcp/ConnectionTable.java |   14 +-
 .../apache/geode/internal/tcp/MsgDestreamer.java   |   15 +
 .../apache/geode/internal/tcp/MsgOutputStream.java |    3 +-
 .../org/apache/geode/internal/tcp/MsgReader.java   |  146 +-
 .../org/apache/geode/internal/tcp/MsgStreamer.java |    1 -
 .../apache/geode/internal/tcp/NIOMsgReader.java    |  109 ++
 ...eerConnectionFactory.java => OioMsgReader.java} |   30 +-
 .../geode/internal/tcp/PeerConnectionFactory.java  |    1 -
 .../org/apache/geode/internal/tcp/TCPConduit.java  |  240 ++-
 .../apache/geode/internal/util/DscodeHelper.java   |    6 +-
 .../management/internal/FederatingManager.java     |    2 +-
 .../sanctioned-geode-core-serializables.txt        |    1 -
 .../org/apache/geode/internal/net/BuffersTest.java |  108 -
 .../geode/internal/net/NioPlainEngineTest.java     |  156 --
 .../geode/internal/net/NioSslEngineTest.java       |  605 ------
 .../geode/internal/tcp/ConnectionJUnitTest.java    |   16 +-
 .../apache/geode/internal/tcp/ConnectionTest.java  |    4 +-
 .../util/PluckStacksJstackGeneratedDump.txt        |   18 +-
 .../geode/test/dunit/internal/ProcessManager.java  |   16 +-
 .../resources/org/apache/geode/server.keystore     |  Bin 1256 -> 0 bytes
 .../geode/internal/cache/wan/WANTestBase.java      |    1 +
 ...lGatewaySenderDistributedDeadlockDUnitTest.java |  229 +--
 47 files changed, 2066 insertions(+), 3589 deletions(-)

diff --git a/geode-core/src/distributedTest/java/org/apache/geode/ClusterCommunicationsDUnitTest.java b/geode-core/src/distributedTest/java/org/apache/geode/ClusterCommunicationsDUnitTest.java
deleted file mode 100644
index c970f77..0000000
--- a/geode-core/src/distributedTest/java/org/apache/geode/ClusterCommunicationsDUnitTest.java
+++ /dev/null
@@ -1,421 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
- * agreements. See the NOTICE file distributed with this work for additional information regarding
- * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License. You may obtain a
- * copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License
- * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
- * or implied. See the License for the specific language governing permissions and limitations under
- * the License.
- */
-package org.apache.geode;
-
-import static org.apache.geode.distributed.ConfigurationProperties.CONSERVE_SOCKETS;
-import static org.apache.geode.distributed.ConfigurationProperties.ENABLE_CLUSTER_CONFIGURATION;
-import static org.apache.geode.distributed.ConfigurationProperties.LOCATORS;
-import static org.apache.geode.distributed.ConfigurationProperties.NAME;
-import static org.apache.geode.distributed.ConfigurationProperties.SOCKET_BUFFER_SIZE;
-import static org.apache.geode.distributed.ConfigurationProperties.SOCKET_LEASE_TIME;
-import static org.apache.geode.distributed.ConfigurationProperties.SSL_ENABLED_COMPONENTS;
-import static org.apache.geode.distributed.ConfigurationProperties.SSL_KEYSTORE;
-import static org.apache.geode.distributed.ConfigurationProperties.SSL_KEYSTORE_PASSWORD;
-import static org.apache.geode.distributed.ConfigurationProperties.SSL_PROTOCOLS;
-import static org.apache.geode.distributed.ConfigurationProperties.SSL_REQUIRE_AUTHENTICATION;
-import static org.apache.geode.distributed.ConfigurationProperties.SSL_TRUSTSTORE;
-import static org.apache.geode.distributed.ConfigurationProperties.SSL_TRUSTSTORE_PASSWORD;
-import static org.apache.geode.distributed.ConfigurationProperties.USE_CLUSTER_CONFIGURATION;
-import static org.apache.geode.internal.DataSerializableFixedID.SERIAL_ACKED_MESSAGE;
-import static org.apache.geode.test.awaitility.GeodeAwaitility.await;
-import static org.apache.geode.test.awaitility.GeodeAwaitility.getTimeout;
-import static org.assertj.core.api.Assertions.assertThat;
-
-import java.io.DataInput;
-import java.io.DataOutput;
-import java.io.File;
-import java.io.IOException;
-import java.util.Arrays;
-import java.util.Collection;
-import java.util.Collections;
-import java.util.List;
-import java.util.Properties;
-import java.util.Set;
-import java.util.concurrent.TimeUnit;
-
-import org.junit.Assert;
-import org.junit.Before;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-import org.junit.runner.RunWith;
-import org.junit.runners.Parameterized;
-
-import org.apache.geode.cache.Cache;
-import org.apache.geode.cache.CacheFactory;
-import org.apache.geode.cache.RegionShortcut;
-import org.apache.geode.distributed.DistributedMember;
-import org.apache.geode.distributed.Locator;
-import org.apache.geode.distributed.internal.ClusterDistributionManager;
-import org.apache.geode.distributed.internal.DirectReplyProcessor;
-import org.apache.geode.distributed.internal.DistributionMessage;
-import org.apache.geode.distributed.internal.InternalDistributedSystem;
-import org.apache.geode.distributed.internal.MessageWithReply;
-import org.apache.geode.distributed.internal.ReplyException;
-import org.apache.geode.distributed.internal.ReplyMessage;
-import org.apache.geode.distributed.internal.SerialAckedMessage;
-import org.apache.geode.distributed.internal.membership.gms.membership.GMSJoinLeave;
-import org.apache.geode.internal.DSFIDFactory;
-import org.apache.geode.internal.cache.DirectReplyMessage;
-import org.apache.geode.test.dunit.Host;
-import org.apache.geode.test.dunit.Invoke;
-import org.apache.geode.test.dunit.VM;
-import org.apache.geode.test.dunit.rules.DistributedRule;
-import org.apache.geode.test.junit.categories.BackwardCompatibilityTest;
-import org.apache.geode.test.junit.categories.MembershipTest;
-import org.apache.geode.test.junit.rules.serializable.SerializableTestName;
-import org.apache.geode.test.junit.runners.CategoryWithParameterizedRunnerFactory;
-import org.apache.geode.test.version.VersionManager;
-import org.apache.geode.util.test.TestUtil;
-
-
-/**
- * This class tests cluster tcp/ip communications both with and without SSL enabled
- */
-@Category({MembershipTest.class, BackwardCompatibilityTest.class})
-@RunWith(Parameterized.class)
-@Parameterized.UseParametersRunnerFactory(CategoryWithParameterizedRunnerFactory.class)
-public class ClusterCommunicationsDUnitTest implements java.io.Serializable {
-
-  private boolean conserveSockets;
-  private boolean useSSL;
-
-  enum RunConfiguration {
-    SHARED_CONNECTIONS(true, false),
-    SHARED_CONNECTIONS_WITH_SSL(true, true),
-    UNSHARED_CONNECTIONS(false, false),
-    UNSHARED_CONNECTIONS_WITH_SSL(false, true);
-
-    boolean useSSL;
-    boolean conserveSockets;
-
-    RunConfiguration(boolean conserveSockets, boolean useSSL) {
-      this.useSSL = useSSL;
-      this.conserveSockets = conserveSockets;
-    }
-  }
-
-  @Parameterized.Parameters(name = "{0}")
-  public static Collection<RunConfiguration> data() {
-    return Arrays.asList(RunConfiguration.values());
-  }
-
-  private static final int NUM_SERVERS = 2;
-  private static final int SMALL_BUFFER_SIZE = 8000;
-
-  private static final long serialVersionUID = -3438183140385150550L;
-
-  private static Cache cache;
-
-  @Rule
-  public DistributedRule distributedRule =
-      DistributedRule.builder().withVMCount(NUM_SERVERS + 1).build();
-
-  @Rule
-  public final SerializableTestName testName = new SerializableTestName();
-
-  final String regionName = "clusterTestRegion";
-
-  public ClusterCommunicationsDUnitTest(RunConfiguration runConfiguration) {
-    this.useSSL = runConfiguration.useSSL;
-    this.conserveSockets = runConfiguration.conserveSockets;
-  }
-
-  @Before
-  public void setUp() throws Exception {
-    final Boolean testWithSSL = useSSL;
-    final Boolean testWithConserveSocketsTrue = conserveSockets;
-    Invoke.invokeInEveryVM(() -> {
-      this.useSSL = testWithSSL;
-      this.conserveSockets = testWithConserveSocketsTrue;
-    });
-  }
-
-  @Test
-  public void createEntryAndVerifyUpdate() {
-    int locatorPort = createLocator(VM.getVM(0));
-    for (int i = 1; i <= NUM_SERVERS; i++) {
-      createCacheAndRegion(VM.getVM(i), locatorPort);
-    }
-    performCreate(VM.getVM(1));
-    for (int i = 1; i <= NUM_SERVERS; i++) {
-      verifyCreatedEntry(VM.getVM(i));
-    }
-    performUpdate(VM.getVM(1));
-    for (int i = 1; i <= NUM_SERVERS; i++) {
-      verifyUpdatedEntry(VM.getVM(i));
-    }
-  }
-
-  @Test
-  public void createEntryWithBigMessage() {
-    int locatorPort = createLocator(VM.getVM(0));
-    for (int i = 1; i <= NUM_SERVERS; i++) {
-      createCacheAndRegion(VM.getVM(i), locatorPort);
-    }
-    performCreateWithLargeValue(VM.getVM(1));
-    // fault the value into an empty cache - forces use of message chunking
-    for (int i = 1; i <= NUM_SERVERS - 1; i++) {
-      verifyCreatedEntry(VM.getVM(i));
-    }
-  }
-
-  @Test
-  public void receiveBigResponse() {
-    Invoke.invokeInEveryVM(() -> DSFIDFactory.registerDSFID(SERIAL_ACKED_MESSAGE,
-        SerialAckedMessageWithBigReply.class));
-    try {
-      int locatorPort = createLocator(VM.getVM(0));
-      for (int i = 1; i <= NUM_SERVERS; i++) {
-        createCacheAndRegion(VM.getVM(i), locatorPort);
-      }
-      final DistributedMember vm2ID =
-          VM.getVM(2).invoke(() -> cache.getDistributedSystem().getDistributedMember());
-      VM.getVM(1).invoke("receive a large direct-reply message", () -> {
-        SerialAckedMessageWithBigReply messageWithBigReply = new SerialAckedMessageWithBigReply();
-        await().until(() -> {
-          messageWithBigReply.send(Collections.<DistributedMember>singleton(vm2ID));
-          return true;
-        });
-      });
-    } finally {
-      Invoke.invokeInEveryVM(
-          () -> DSFIDFactory.registerDSFID(SERIAL_ACKED_MESSAGE, SerialAckedMessage.class));
-    }
-  }
-
-  @Test
-  public void performARollingUpgrade() {
-    List<String> testVersions = VersionManager.getInstance().getVersionsWithoutCurrent();
-    Collections.sort(testVersions);
-    String testVersion = testVersions.get(testVersions.size() - 1);
-
-    // create a cluster with the previous version of Geode
-    VM locatorVM = Host.getHost(0).getVM(testVersion, 0);
-    VM server1VM = Host.getHost(0).getVM(testVersion, 1);
-    int locatorPort = createLocator(locatorVM);
-    createCacheAndRegion(server1VM, locatorPort);
-    performCreate(VM.getVM(1));
-
-    // roll the locator to the current version
-    locatorVM.invoke("stop locator", () -> Locator.getLocator().stop());
-    locatorVM = Host.getHost(0).getVM(VersionManager.CURRENT_VERSION, 0);
-    locatorVM.invoke("roll locator to current version", () -> {
-      // if you need to debug SSL communications use this property:
-      // System.setProperty("javax.net.debug", "all");
-      Properties props = getDistributedSystemProperties();
-      // locator must restart with the same port so that it reconnects to the server
-      await().atMost(getTimeout().getValueInMS(), TimeUnit.MILLISECONDS)
-          .until(() -> Locator.startLocatorAndDS(locatorPort, new File(""), props) != null);
-      assertThat(Locator.getLocator().getDistributedSystem().getAllOtherMembers().size())
-          .isGreaterThan(0);
-    });
-
-    // start server2 with current version
-    VM server2VM = Host.getHost(0).getVM(VersionManager.CURRENT_VERSION, 2);
-    createCacheAndRegion(server2VM, locatorPort);
-
-    // roll server1 to the current version
-    server1VM.invoke("stop server1", () -> {
-      cache.close();
-    });
-    server1VM = Host.getHost(0).getVM(VersionManager.CURRENT_VERSION, 1);
-    createCacheAndRegion(server1VM, locatorPort);
-
-
-    verifyCreatedEntry(server1VM);
-    verifyCreatedEntry(server2VM);
-  }
-
-  private void createCacheAndRegion(VM memberVM, int locatorPort) {
-    memberVM.invoke("start cache and create region", () -> {
-      cache = createCache(locatorPort);
-      cache.createRegionFactory(RegionShortcut.REPLICATE).create(regionName);
-    });
-  }
-
-
-  private void performCreate(VM memberVM) {
-    memberVM.invoke("perform create", () -> cache
-        .getRegion(regionName).put("testKey", "testValue"));
-  }
-
-  private void performUpdate(VM memberVM) {
-    memberVM.invoke("perform update", () -> cache
-        .getRegion(regionName).put("testKey", "updatedTestValue"));
-  }
-
-  private void performCreateWithLargeValue(VM memberVM) {
-    memberVM.invoke("perform create", () -> {
-      byte[] value = new byte[SMALL_BUFFER_SIZE * 20];
-      Arrays.fill(value, (byte) 1);
-      cache.getRegion(regionName).put("testKey", value);
-    });
-  }
-
-  private void verifyCreatedEntry(VM memberVM) {
-    memberVM.invoke("verify entry created", () -> Assert.assertTrue(cache
-        .getRegion(regionName).containsKey("testKey")));
-  }
-
-  private void verifyUpdatedEntry(VM memberVM) {
-    memberVM.invoke("verify entry updated", () -> Assert.assertTrue(cache
-        .getRegion(regionName).containsValue("updatedTestValue")));
-  }
-
-  private int createLocator(VM memberVM) {
-    return memberVM.invoke("create locator", () -> {
-      // if you need to debug SSL communications use this property:
-      // System.setProperty("javax.net.debug", "all");
-      System.setProperty(GMSJoinLeave.BYPASS_DISCOVERY_PROPERTY, "true");
-      try {
-        return Locator.startLocatorAndDS(0, new File(""), getDistributedSystemProperties())
-            .getPort();
-      } finally {
-        System.clearProperty(GMSJoinLeave.BYPASS_DISCOVERY_PROPERTY);
-      }
-    });
-  }
-
-  private Cache createCache(int locatorPort) {
-    // if you need to debug SSL communications use this property:
-    // System.setProperty("javax.net.debug", "all");
-    Properties properties = getDistributedSystemProperties();
-    properties.put(LOCATORS, "localhost[" + locatorPort + "]");
-    return new CacheFactory(properties).create();
-  }
-
-  public Properties getDistributedSystemProperties() {
-    Properties properties = new Properties();
-    properties.put(ENABLE_CLUSTER_CONFIGURATION, "false");
-    properties.put(USE_CLUSTER_CONFIGURATION, "false");
-    properties.put(NAME, "vm" + VM.getCurrentVMNum());
-    properties.put(CONSERVE_SOCKETS, "" + conserveSockets);
-    properties.put(SOCKET_LEASE_TIME, "10000");
-    properties.put(SOCKET_BUFFER_SIZE, "" + SMALL_BUFFER_SIZE);
-
-    if (useSSL) {
-      properties.put(SSL_ENABLED_COMPONENTS, "cluster,locator");
-      properties.put(SSL_KEYSTORE, TestUtil.getResourcePath(this.getClass(), "server.keystore"));
-      properties.put(SSL_TRUSTSTORE, TestUtil.getResourcePath(this.getClass(), "server.keystore"));
-      properties.put(SSL_PROTOCOLS, "TLSv1.2");
-      properties.put(SSL_KEYSTORE_PASSWORD, "password");
-      properties.put(SSL_TRUSTSTORE_PASSWORD, "password");
-      properties.put(SSL_REQUIRE_AUTHENTICATION, "true");
-    }
-    return properties;
-  }
-
-  /**
-   * SerialAckedMessageWithBigReply requires conserve-sockets=false and acts to send
-   * a large reply message to the sender. You must have already created a cache in the
-   * sender and receiver VMs and registered this class with the DataSerializableFixedID
-   * of SERIAL_ACKED_MESSAGE. Don't forget to reset the registration to
-   * SerialAckedMessage at the end of the test.
-   */
-  public static class SerialAckedMessageWithBigReply extends DistributionMessage
-      implements MessageWithReply,
-      DirectReplyMessage {
-    static final int DSFID = SERIAL_ACKED_MESSAGE;
-
-    private int processorId;
-    private transient ClusterDistributionManager originDm;
-    private transient DirectReplyProcessor replyProcessor;
-
-    public SerialAckedMessageWithBigReply() {
-      super();
-      InternalDistributedSystem ds = InternalDistributedSystem.getAnyInstance();
-      if (ds != null) { // this constructor is used in serialization as well as when sending to
-                        // others
-        this.originDm = (ClusterDistributionManager) ds.getDistributionManager();
-      }
-    }
-
-    public void send(Set<DistributedMember> recipients)
-        throws InterruptedException, ReplyException {
-      // this message is only used by battery tests so we can log info level debug
-      // messages
-      replyProcessor = new DirectReplyProcessor(originDm, recipients);
-      processorId = replyProcessor.getProcessorId();
-      setRecipients(recipients);
-      Set failures = originDm.putOutgoing(this);
-      if (failures != null && failures.size() > 0) {
-        for (Object failure : failures) {
-          System.err.println("Unable to send serial acked message to " + failure);
-        }
-      }
-
-      replyProcessor.waitForReplies();
-    }
-
-    @Override
-    public void toData(DataOutput out) throws IOException {
-      super.toData(out);
-      out.writeInt(processorId);
-    }
-
-    @Override
-    public void fromData(DataInput in) throws IOException, ClassNotFoundException {
-      super.fromData(in);
-      processorId = in.readInt();
-    }
-
-    @Override
-    protected void process(ClusterDistributionManager dm) {
-      ReplyMessage reply = new ReplyMessage();
-      reply.setProcessorId(processorId);
-      reply.setRecipient(getSender());
-      byte[] returnValue = new byte[SMALL_BUFFER_SIZE * 6];
-      reply.setReturnValue(returnValue);
-      System.out.println("<" + Thread.currentThread().getName() +
-          "> sending reply with return value size "
-          + returnValue.length + " using " + getReplySender(dm));
-      getReplySender(dm).putOutgoing(reply);
-    }
-
-    @Override
-    public int getProcessorId() {
-      return processorId;
-    }
-
-    @Override
-    public int getProcessorType() {
-      return ClusterDistributionManager.SERIAL_EXECUTOR;
-    }
-
-    @Override
-    public int getDSFID() {
-      return DSFID;
-    }
-
-    @Override
-    public DirectReplyProcessor getDirectReplyProcessor() {
-      return replyProcessor;
-    }
-
-    @Override
-    public boolean supportsDirectAck() {
-      return processorId == 0;
-    }
-
-    @Override
-    public void registerProcessor() {
-      if (replyProcessor != null) {
-        this.processorId = this.replyProcessor.register();
-      }
-    }
-  }
-
-}
diff --git a/geode-core/src/distributedTest/java/org/apache/geode/cache/client/internal/CacheServerSSLConnectionDUnitTest.java b/geode-core/src/distributedTest/java/org/apache/geode/cache/client/internal/CacheServerSSLConnectionDUnitTest.java
index 986d04e..58ba260 100644
--- a/geode-core/src/distributedTest/java/org/apache/geode/cache/client/internal/CacheServerSSLConnectionDUnitTest.java
+++ b/geode-core/src/distributedTest/java/org/apache/geode/cache/client/internal/CacheServerSSLConnectionDUnitTest.java
@@ -23,6 +23,7 @@ import static org.apache.geode.distributed.ConfigurationProperties.CLUSTER_SSL_P
 import static org.apache.geode.distributed.ConfigurationProperties.CLUSTER_SSL_REQUIRE_AUTHENTICATION;
 import static org.apache.geode.distributed.ConfigurationProperties.CLUSTER_SSL_TRUSTSTORE;
 import static org.apache.geode.distributed.ConfigurationProperties.CLUSTER_SSL_TRUSTSTORE_PASSWORD;
+import static org.apache.geode.distributed.ConfigurationProperties.LOCATORS;
 import static org.apache.geode.distributed.ConfigurationProperties.MCAST_PORT;
 import static org.apache.geode.distributed.ConfigurationProperties.SERVER_SSL_CIPHERS;
 import static org.apache.geode.distributed.ConfigurationProperties.SERVER_SSL_ENABLED;
@@ -48,7 +49,6 @@ import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
-import java.io.File;
 import java.io.IOException;
 import java.io.PrintWriter;
 import java.io.StringWriter;
@@ -77,7 +77,6 @@ import org.apache.geode.cache.client.ClientRegionFactory;
 import org.apache.geode.cache.client.ClientRegionShortcut;
 import org.apache.geode.cache.client.NoAvailableServersException;
 import org.apache.geode.cache.server.CacheServer;
-import org.apache.geode.distributed.Locator;
 import org.apache.geode.internal.security.SecurableCommunicationChannel;
 import org.apache.geode.security.AuthenticationRequiredException;
 import org.apache.geode.test.dunit.AsyncInvocation;
@@ -139,20 +138,13 @@ public class CacheServerSSLConnectionDUnitTest extends JUnit4DistributedTestCase
 
   @AfterClass
   public static void postClass() {
-    Invoke.invokeInEveryVM(() -> {
-      if (instance.cache != null) {
-        instance.cache.close();
-      }
-      instance = null;
-    });
-    if (instance.cache != null) {
-      instance.cache.close();
-    }
+    Invoke.invokeInEveryVM(() -> instance = null);
     instance = null;
   }
 
   public Cache createCache(Properties props) throws Exception {
     props.setProperty(MCAST_PORT, "0");
+    props.setProperty(LOCATORS, "");
     cache = new CacheFactory(props).create();
     if (cache == null) {
       throw new Exception("CacheFactory.create() returned null ");
@@ -178,21 +170,30 @@ public class CacheServerSSLConnectionDUnitTest extends JUnit4DistributedTestCase
   }
 
   @SuppressWarnings("rawtypes")
-  public void setUpServerVM(final boolean cacheServerSslenabled, int optionalLocatorPort)
-      throws Exception {
+  public void setUpServerVM(final boolean cacheServerSslenabled) throws Exception {
     System.setProperty("javax.net.debug", "ssl,handshake");
 
     Properties gemFireProps = new Properties();
-    if (optionalLocatorPort > 0) {
-      gemFireProps.put("locators", "localhost[" + optionalLocatorPort + "]");
-    }
 
     String cacheServerSslprotocols = "any";
     String cacheServerSslciphers = "any";
     boolean cacheServerSslRequireAuth = true;
     if (!useOldSSLSettings) {
-      getNewSSLSettings(gemFireProps, cacheServerSslprotocols, cacheServerSslciphers,
-          cacheServerSslRequireAuth);
+      gemFireProps.put(SSL_ENABLED_COMPONENTS,
+          SecurableCommunicationChannel.CLUSTER + "," + SecurableCommunicationChannel.SERVER);
+      gemFireProps.put(SSL_PROTOCOLS, cacheServerSslprotocols);
+      gemFireProps.put(SSL_CIPHERS, cacheServerSslciphers);
+      gemFireProps.put(SSL_REQUIRE_AUTHENTICATION, String.valueOf(cacheServerSslRequireAuth));
+
+      String keyStore =
+          TestUtil.getResourcePath(CacheServerSSLConnectionDUnitTest.class, SERVER_KEY_STORE);
+      String trustStore =
+          TestUtil.getResourcePath(CacheServerSSLConnectionDUnitTest.class, SERVER_TRUST_STORE);
+      gemFireProps.put(SSL_KEYSTORE_TYPE, "jks");
+      gemFireProps.put(SSL_KEYSTORE, keyStore);
+      gemFireProps.put(SSL_KEYSTORE_PASSWORD, "password");
+      gemFireProps.put(SSL_TRUSTSTORE, trustStore);
+      gemFireProps.put(SSL_TRUSTSTORE_PASSWORD, "password");
     } else {
       gemFireProps.put(CLUSTER_SSL_ENABLED, String.valueOf(cacheServerSslenabled));
       gemFireProps.put(CLUSTER_SSL_PROTOCOLS, cacheServerSslprotocols);
@@ -221,25 +222,6 @@ public class CacheServerSSLConnectionDUnitTest extends JUnit4DistributedTestCase
     r.put("serverkey", "servervalue");
   }
 
-  private void getNewSSLSettings(Properties gemFireProps, String cacheServerSslprotocols,
-      String cacheServerSslciphers, boolean cacheServerSslRequireAuth) {
-    gemFireProps.put(SSL_ENABLED_COMPONENTS,
-        SecurableCommunicationChannel.CLUSTER + "," + SecurableCommunicationChannel.SERVER);
-    gemFireProps.put(SSL_PROTOCOLS, cacheServerSslprotocols);
-    gemFireProps.put(SSL_CIPHERS, cacheServerSslciphers);
-    gemFireProps.put(SSL_REQUIRE_AUTHENTICATION, String.valueOf(cacheServerSslRequireAuth));
-
-    String keyStore =
-        TestUtil.getResourcePath(CacheServerSSLConnectionDUnitTest.class, SERVER_KEY_STORE);
-    String trustStore =
-        TestUtil.getResourcePath(CacheServerSSLConnectionDUnitTest.class, SERVER_TRUST_STORE);
-    gemFireProps.put(SSL_KEYSTORE_TYPE, "jks");
-    gemFireProps.put(SSL_KEYSTORE, keyStore);
-    gemFireProps.put(SSL_KEYSTORE_PASSWORD, "password");
-    gemFireProps.put(SSL_TRUSTSTORE, trustStore);
-    gemFireProps.put(SSL_TRUSTSTORE_PASSWORD, "password");
-  }
-
   public void setUpClientVM(String host, int port, boolean cacheServerSslenabled,
       boolean cacheServerSslRequireAuth, String keyStore, String trustStore, boolean subscription,
       boolean clientHasTrustedKeystore) {
@@ -304,7 +286,6 @@ public class CacheServerSSLConnectionDUnitTest extends JUnit4DistributedTestCase
 
     ClientCacheFactory clientCacheFactory = new ClientCacheFactory(gemFireProps);
     clientCacheFactory.setPoolSubscriptionEnabled(subscription).addPoolServer(host, port);
-    clientCacheFactory.setPoolRetryAttempts(5);
     clientCache = clientCacheFactory.create();
 
     ClientRegionFactory<String, String> regionFactory =
@@ -327,9 +308,8 @@ public class CacheServerSSLConnectionDUnitTest extends JUnit4DistributedTestCase
   }
 
 
-  public static void setUpServerVMTask(boolean cacheServerSslenabled, int optionalLocatorPort)
-      throws Exception {
-    instance.setUpServerVM(cacheServerSslenabled, optionalLocatorPort);
+  public static void setUpServerVMTask(boolean cacheServerSslenabled) throws Exception {
+    instance.setUpServerVM(cacheServerSslenabled);
   }
 
   public static int createServerTask() throws Exception {
@@ -391,35 +371,20 @@ public class CacheServerSSLConnectionDUnitTest extends JUnit4DistributedTestCase
     final Host host = Host.getHost(0);
     VM serverVM = host.getVM(1);
     VM clientVM = host.getVM(2);
-    VM serverVM2 = host.getVM(3);
 
     boolean cacheServerSslenabled = true;
     boolean cacheClientSslenabled = true;
     boolean cacheClientSslRequireAuth = true;
 
-    Properties locatorProps = new Properties();
-    String cacheServerSslprotocols = "any";
-    String cacheServerSslciphers = "any";
-    boolean cacheServerSslRequireAuth = true;
-    getNewSSLSettings(locatorProps, cacheServerSslprotocols, cacheServerSslciphers,
-        cacheServerSslRequireAuth);
-    Locator locator = Locator.startLocatorAndDS(0, new File(""), locatorProps);
-    int locatorPort = locator.getPort();
-    try {
-      serverVM.invoke(() -> setUpServerVMTask(cacheServerSslenabled, locatorPort));
-      int port = serverVM.invoke(() -> createServerTask());
-      serverVM2.invoke(() -> setUpServerVMTask(cacheServerSslenabled, locatorPort));
-      serverVM2.invoke(() -> createServerTask());
+    serverVM.invoke(() -> setUpServerVMTask(cacheServerSslenabled));
+    int port = serverVM.invoke(() -> createServerTask());
 
-      String hostName = host.getHostName();
+    String hostName = host.getHostName();
 
-      clientVM.invoke(() -> setUpClientVMTask(hostName, port, cacheClientSslenabled,
-          cacheClientSslRequireAuth, CLIENT_KEY_STORE, CLIENT_TRUST_STORE, true));
-      clientVM.invoke(() -> doClientRegionTestTask());
-      serverVM.invoke(() -> doServerRegionTestTask());
-    } finally {
-      locator.stop();
-    }
+    clientVM.invoke(() -> setUpClientVMTask(hostName, port, cacheClientSslenabled,
+        cacheClientSslRequireAuth, CLIENT_KEY_STORE, CLIENT_TRUST_STORE, true));
+    clientVM.invoke(() -> doClientRegionTestTask());
+    serverVM.invoke(() -> doServerRegionTestTask());
   }
 
   /**
@@ -448,7 +413,7 @@ public class CacheServerSSLConnectionDUnitTest extends JUnit4DistributedTestCase
     boolean cacheClientSslenabled = true;
     boolean cacheClientSslRequireAuth = true;
 
-    serverVM.invoke(() -> setUpServerVMTask(cacheServerSslenabled, 0));
+    serverVM.invoke(() -> setUpServerVMTask(cacheServerSslenabled));
     int port = serverVM.invoke(() -> createServerTask());
 
     String hostName = host.getHostName();
@@ -499,7 +464,7 @@ public class CacheServerSSLConnectionDUnitTest extends JUnit4DistributedTestCase
     boolean cacheClientSslenabled = false;
     boolean cacheClientSslRequireAuth = true;
 
-    serverVM.invoke(() -> setUpServerVMTask(cacheServerSslenabled, 0));
+    serverVM.invoke(() -> setUpServerVMTask(cacheServerSslenabled));
     serverVM.invoke(() -> createServerTask());
 
     Object array[] = (Object[]) serverVM.invoke(() -> getCacheServerEndPointTask());
@@ -546,7 +511,7 @@ public class CacheServerSSLConnectionDUnitTest extends JUnit4DistributedTestCase
     IgnoredException.addIgnoredException("SSLHandshakeException");
     IgnoredException.addIgnoredException("ValidatorException");
 
-    serverVM.invoke(() -> setUpServerVMTask(cacheServerSslenabled, 0));
+    serverVM.invoke(() -> setUpServerVMTask(cacheServerSslenabled));
     serverVM.invoke(() -> createServerTask());
 
     Object array[] = (Object[]) serverVM.invoke(() -> getCacheServerEndPointTask());
@@ -569,7 +534,7 @@ public class CacheServerSSLConnectionDUnitTest extends JUnit4DistributedTestCase
     boolean cacheClientSslenabled = true;
     boolean cacheClientSslRequireAuth = false;
 
-    serverVM.invoke(() -> setUpServerVMTask(cacheServerSslenabled, 0));
+    serverVM.invoke(() -> setUpServerVMTask(cacheServerSslenabled));
     serverVM.invoke(() -> createServerTask());
 
     Object array[] = (Object[]) serverVM.invoke(() -> getCacheServerEndPointTask());
@@ -602,7 +567,7 @@ public class CacheServerSSLConnectionDUnitTest extends JUnit4DistributedTestCase
     boolean cacheClientSslenabled = true;
     boolean cacheClientSslRequireAuth = true;
 
-    serverVM.invoke(() -> setUpServerVMTask(cacheServerSslenabled, 0));
+    serverVM.invoke(() -> setUpServerVMTask(cacheServerSslenabled));
     serverVM.invoke(() -> createServerTask());
 
     Object array[] = (Object[]) serverVM.invoke(() -> getCacheServerEndPointTask());
diff --git a/geode-core/src/distributedTest/java/org/apache/geode/cache/client/internal/ClientServerHostNameVerificationDistributedTest.java b/geode-core/src/distributedTest/java/org/apache/geode/cache/client/internal/ClientServerHostNameVerificationDistributedTest.java
index 5565360..eda8d9c 100644
--- a/geode-core/src/distributedTest/java/org/apache/geode/cache/client/internal/ClientServerHostNameVerificationDistributedTest.java
+++ b/geode-core/src/distributedTest/java/org/apache/geode/cache/client/internal/ClientServerHostNameVerificationDistributedTest.java
@@ -89,7 +89,6 @@ public class ClientServerHostNameVerificationDistributedTest {
 
   @Test
   public void expectConnectionFailureWhenNoHostNameInLocatorKey() throws Exception {
-
     CertificateBuilder locatorCertificate = new CertificateBuilder()
         .commonName("locator");
 
@@ -106,7 +105,6 @@ public class ClientServerHostNameVerificationDistributedTest {
 
   @Test
   public void expectConnectionFailureWhenWrongHostNameInLocatorKey() throws Exception {
-
     CertificateBuilder locatorCertificate = new CertificateBuilder()
         .commonName("locator")
         .sanDnsName("example.com");;
@@ -201,13 +199,10 @@ public class ClientServerHostNameVerificationDistributedTest {
       ClientRegionFactory<String, String> regionFactory =
           clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY);
 
-      IgnoredException.addIgnoredException("Connection reset");
-      IgnoredException.addIgnoredException("java.io.IOException");
       if (expectedExceptionOnClient != null) {
         IgnoredException.addIgnoredException("javax.net.ssl.SSLHandshakeException");
         IgnoredException.addIgnoredException("java.net.SocketException");
         IgnoredException.addIgnoredException("java.security.cert.CertificateException");
-        IgnoredException.addIgnoredException("java.net.ssl.SSLProtocolException");
 
         Region<String, String> clientRegion = regionFactory.create("region");
         assertThatExceptionOfType(expectedExceptionOnClient)
diff --git a/geode-core/src/distributedTest/java/org/apache/geode/distributed/internal/ValueToDataThrowsRuntimeExceptionRegressionTest.java b/geode-core/src/distributedTest/java/org/apache/geode/distributed/internal/ValueToDataThrowsRuntimeExceptionRegressionTest.java
index 7e469b9..e2166b0 100644
--- a/geode-core/src/distributedTest/java/org/apache/geode/distributed/internal/ValueToDataThrowsRuntimeExceptionRegressionTest.java
+++ b/geode-core/src/distributedTest/java/org/apache/geode/distributed/internal/ValueToDataThrowsRuntimeExceptionRegressionTest.java
@@ -97,10 +97,12 @@ public class ValueToDataThrowsRuntimeExceptionRegressionTest extends JUnit4Cache
       Invoke.invokeInEveryVM(new SerializableCallable() {
         @Override
         public Object call() throws Exception {
+          System.getProperties().remove("p2p.oldIO");
           System.getProperties().remove("p2p.nodirectBuffers");
           return null;
         }
       });
+      System.getProperties().remove("p2p.oldIO");
       System.getProperties().remove("p2p.nodirectBuffers");
     }
   }
@@ -108,6 +110,7 @@ public class ValueToDataThrowsRuntimeExceptionRegressionTest extends JUnit4Cache
   @Override
   public Properties getDistributedSystemProperties() {
     Properties props = new Properties();
+    System.setProperty("p2p.oldIO", "true");
     props.setProperty(CONSERVE_SOCKETS, "true");
     // props.setProperty(DistributionConfig.ConfigurationProperties.MCAST_PORT, "12333");
     // props.setProperty(DistributionConfig.DISABLE_TCP_NAME, "true");
diff --git a/geode-core/src/distributedTest/java/org/apache/geode/internal/cache/ConcurrentMapOpsDUnitTest.java b/geode-core/src/distributedTest/java/org/apache/geode/internal/cache/ConcurrentMapOpsDUnitTest.java
index 0ca774f..f49e5a9 100644
--- a/geode-core/src/distributedTest/java/org/apache/geode/internal/cache/ConcurrentMapOpsDUnitTest.java
+++ b/geode-core/src/distributedTest/java/org/apache/geode/internal/cache/ConcurrentMapOpsDUnitTest.java
@@ -810,6 +810,7 @@ public class ConcurrentMapOpsDUnitTest extends JUnit4CacheTestCase {
         getCache().getLogger().fine("SWAP:doingRemove");
         assertTrue(r.remove("key0", "value"));
 
+        getCache().getLogger().fine("Bruce:doingExtraRemoves.  Bug #47010");
         DestroyOp.TEST_HOOK_ENTRY_NOT_FOUND = false;
         assertTrue(r.remove("key0") == null);
         assertTrue(DestroyOp.TEST_HOOK_ENTRY_NOT_FOUND);
diff --git a/geode-core/src/integrationTest/java/org/apache/geode/internal/net/SSLSocketIntegrationTest.java b/geode-core/src/integrationTest/java/org/apache/geode/internal/net/SSLSocketIntegrationTest.java
index 8e27671..32640d9 100755
--- a/geode-core/src/integrationTest/java/org/apache/geode/internal/net/SSLSocketIntegrationTest.java
+++ b/geode-core/src/integrationTest/java/org/apache/geode/internal/net/SSLSocketIntegrationTest.java
@@ -22,34 +22,24 @@ import static org.apache.geode.distributed.ConfigurationProperties.MCAST_PORT;
 import static org.apache.geode.internal.security.SecurableCommunicationChannel.CLUSTER;
 import static org.apache.geode.test.awaitility.GeodeAwaitility.await;
 import static org.assertj.core.api.Assertions.assertThat;
-import static org.assertj.core.api.Assertions.assertThatThrownBy;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotEquals;
 import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertNull;
-import static org.mockito.Mockito.mock;
 
-import java.io.DataInputStream;
-import java.io.DataOutputStream;
 import java.io.File;
 import java.io.IOException;
 import java.io.ObjectInputStream;
 import java.io.ObjectOutputStream;
-import java.net.ConnectException;
 import java.net.InetAddress;
 import java.net.InetSocketAddress;
 import java.net.ServerSocket;
 import java.net.Socket;
-import java.net.SocketException;
 import java.net.SocketTimeoutException;
 import java.net.URL;
-import java.nio.ByteBuffer;
-import java.nio.channels.ServerSocketChannel;
-import java.nio.channels.SocketChannel;
 import java.util.Properties;
 import java.util.concurrent.Semaphore;
-import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicReference;
 
 import javax.net.ssl.SSLContext;
@@ -66,10 +56,8 @@ import org.junit.rules.ErrorCollector;
 import org.junit.rules.TemporaryFolder;
 import org.junit.rules.TestName;
 
-import org.apache.geode.distributed.internal.DMStats;
 import org.apache.geode.distributed.internal.DistributionConfig;
 import org.apache.geode.distributed.internal.DistributionConfigImpl;
-import org.apache.geode.internal.ByteBufferOutputStream;
 import org.apache.geode.internal.security.SecurableCommunicationChannel;
 import org.apache.geode.internal.tcp.ByteBufferInputStream;
 import org.apache.geode.test.dunit.IgnoredException;
@@ -121,8 +109,6 @@ public class SSLSocketIntegrationTest {
     System.setProperty("javax.net.ssl.trustStorePassword", "password");
     System.setProperty("javax.net.ssl.keyStore", keystore.getCanonicalPath());
     System.setProperty("javax.net.ssl.keyStorePassword", "password");
-    // System.setProperty("javax.net.debug", "ssl,handshake");
-
 
     Properties properties = new Properties();
     properties.setProperty(MCAST_PORT, "0");
@@ -193,131 +179,6 @@ public class SSLSocketIntegrationTest {
     assertThat(this.messageFromClient.get()).isEqualTo(MESSAGE);
   }
 
-  @Test
-  public void testSecuredSocketTransmissionShouldWorkUsingNIO() throws Exception {
-    ServerSocketChannel serverChannel = ServerSocketChannel.open();
-    serverSocket = serverChannel.socket();
-
-    InetSocketAddress addr = new InetSocketAddress(localHost, 0);
-    serverSocket.bind(addr, 10);
-    int serverPort = this.serverSocket.getLocalPort();
-
-    SocketCreator clusterSocketCreator =
-        SocketCreatorFactory.getSocketCreatorForComponent(SecurableCommunicationChannel.CLUSTER);
-    this.serverThread = startServerNIO(serverSocket, 15000);
-
-    await().until(() -> serverThread.isAlive());
-
-    SocketChannel clientChannel = SocketChannel.open();
-    await().until(
-        () -> clientChannel.connect(new InetSocketAddress(localHost, serverPort)));
-
-    clientSocket = clientChannel.socket();
-    NioSslEngine engine =
-        clusterSocketCreator.handshakeSSLSocketChannel(clientSocket.getChannel(),
-            clusterSocketCreator.createSSLEngine("localhost", 1234), 0, true,
-            ByteBuffer.allocate(65535), mock(DMStats.class));
-    clientChannel.configureBlocking(true);
-
-    // transmit expected string from Client to Server
-    writeMessageToNIOSSLServer(clientChannel, engine);
-    writeMessageToNIOSSLServer(clientChannel, engine);
-    writeMessageToNIOSSLServer(clientChannel, engine);
-    // this is the real assertion of this test
-    await().until(() -> {
-      return !serverThread.isAlive();
-    });
-    assertNull(serverException);
-    // assertThat(this.messageFromClient.get()).isEqualTo(MESSAGE);
-  }
-
-  private void writeMessageToNIOSSLServer(SocketChannel clientChannel, NioSslEngine engine)
-      throws IOException {
-    System.out.println("client sending Hello World message to server");
-    ByteBufferOutputStream bbos = new ByteBufferOutputStream(5000);
-    DataOutputStream dos = new DataOutputStream(bbos);
-    dos.writeUTF("Hello world");
-    dos.flush();
-    bbos.flush();
-    ByteBuffer buffer = bbos.getContentBuffer();
-    System.out.println(
-        "client buffer position is " + buffer.position() + " and limit is " + buffer.limit());
-    ByteBuffer wrappedBuffer = engine.wrap(buffer);
-    System.out.println("client wrapped buffer position is " + wrappedBuffer.position()
-        + " and limit is " + wrappedBuffer.limit());
-    int bytesWritten = clientChannel.write(wrappedBuffer);
-    System.out.println("client bytes written is " + bytesWritten);
-  }
-
-  private Thread startServerNIO(final ServerSocket serverSocket, int timeoutMillis)
-      throws Exception {
-    Thread serverThread = new Thread(new MyThreadGroup(this.testName.getMethodName()), () -> {
-      NioSslEngine engine = null;
-      Socket socket = null;
-      try {
-        ByteBuffer buffer = ByteBuffer.allocate(65535);
-
-        socket = serverSocket.accept();
-        SocketCreator sc = SocketCreatorFactory.getSocketCreatorForComponent(CLUSTER);
-        engine =
-            sc.handshakeSSLSocketChannel(socket.getChannel(), sc.createSSLEngine("localhost", 1234),
-                timeoutMillis,
-                false,
-                ByteBuffer.allocate(500),
-                mock(DMStats.class));
-
-        readMessageFromNIOSSLClient(socket, buffer, engine);
-        readMessageFromNIOSSLClient(socket, buffer, engine);
-        readMessageFromNIOSSLClient(socket, buffer, engine);
-      } catch (Throwable throwable) {
-        throwable.printStackTrace(System.out);
-        serverException = throwable;
-      } finally {
-        if (engine != null && socket != null) {
-          final NioSslEngine nioSslEngine = engine;
-          engine.close(socket.getChannel());
-          assertThatThrownBy(() -> {
-            nioSslEngine.unwrap(ByteBuffer.wrap(new byte[0]));
-          })
-              .isInstanceOf(IllegalStateException.class);
-        }
-      }
-    }, this.testName.getMethodName() + "-server");
-
-    serverThread.start();
-    return serverThread;
-  }
-
-  private void readMessageFromNIOSSLClient(Socket socket, ByteBuffer buffer, NioSslEngine engine)
-      throws IOException {
-
-    ByteBuffer unwrapped = engine.getUnwrappedBuffer(buffer);
-    // if we already have unencrypted data skip unwrapping
-    if (unwrapped.position() == 0) {
-      int bytesRead;
-      // if we already have encrypted data skip reading from the socket
-      if (buffer.position() == 0) {
-        bytesRead = socket.getChannel().read(buffer);
-        buffer.flip();
-      } else {
-        bytesRead = buffer.remaining();
-      }
-      System.out.println("server bytes read is " + bytesRead + ": buffer position is "
-          + buffer.position() + " and limit is " + buffer.limit());
-      unwrapped = engine.unwrap(buffer);
-      unwrapped.flip();
-      System.out.println("server unwrapped buffer position is " + unwrapped.position()
-          + " and limit is " + unwrapped.limit());
-    }
-    ByteBufferInputStream bbis = new ByteBufferInputStream(unwrapped);
-    DataInputStream dis = new DataInputStream(bbis);
-    String welcome = dis.readUTF();
-    engine.doneReading(unwrapped);
-    assertThat(welcome).isEqualTo("Hello world");
-    System.out.println("server read Hello World message from client");
-  }
-
-
   @Test(expected = SocketTimeoutException.class)
   public void handshakeCanTimeoutOnServer() throws Throwable {
     this.serverSocket = this.socketCreator.createServerSocket(0, 0, this.localHost);
@@ -335,33 +196,6 @@ public class SSLSocketIntegrationTest {
     throw serverException;
   }
 
-  @Test(expected = SocketTimeoutException.class)
-  public void handshakeWithPeerCanTimeout() throws Throwable {
-    ServerSocketChannel serverChannel = ServerSocketChannel.open();
-    serverSocket = serverChannel.socket();
-
-    InetSocketAddress addr = new InetSocketAddress(localHost, 0);
-    serverSocket.bind(addr, 10);
-    int serverPort = this.serverSocket.getLocalPort();
-
-    this.serverThread = startServerNIO(this.serverSocket, 1000);
-
-    Socket socket = new Socket();
-    await().atMost(5, TimeUnit.MINUTES).until(() -> {
-      try {
-        socket.connect(new InetSocketAddress(localHost, serverPort));
-      } catch (ConnectException e) {
-        return false;
-      } catch (SocketException e) {
-        return true; // server socket was closed
-      }
-      return true;
-    });
-    await().untilAsserted(() -> assertFalse(serverThread.isAlive()));
-    assertNotNull(serverException);
-    throw serverException;
-  }
-
   @Test
   public void configureClientSSLSocketCanTimeOut() throws Exception {
     final Semaphore serverCoordination = new Semaphore(0);
@@ -451,6 +285,7 @@ public class SSLSocketIntegrationTest {
 
   private Thread startServer(final ServerSocket serverSocket, int timeoutMillis) throws Exception {
     Thread serverThread = new Thread(new MyThreadGroup(this.testName.getMethodName()), () -> {
+      long startTime = System.currentTimeMillis();
       try {
         Socket socket = serverSocket.accept();
         SocketCreatorFactory.getSocketCreatorForComponent(CLUSTER).handshakeIfSocketIsSSL(socket,
diff --git a/geode-core/src/main/java/org/apache/geode/distributed/internal/DistributionStats.java b/geode-core/src/main/java/org/apache/geode/distributed/internal/DistributionStats.java
index 91c47e2..845c955 100644
--- a/geode-core/src/main/java/org/apache/geode/distributed/internal/DistributionStats.java
+++ b/geode-core/src/main/java/org/apache/geode/distributed/internal/DistributionStats.java
@@ -25,8 +25,8 @@ import org.apache.geode.annotations.Immutable;
 import org.apache.geode.annotations.internal.MakeNotStatic;
 import org.apache.geode.internal.NanoTimer;
 import org.apache.geode.internal.logging.LogService;
-import org.apache.geode.internal.net.Buffers;
 import org.apache.geode.internal.statistics.StatisticsTypeFactoryImpl;
+import org.apache.geode.internal.tcp.Buffers;
 import org.apache.geode.internal.util.Breadcrumbs;
 
 /**
diff --git a/geode-core/src/main/java/org/apache/geode/distributed/internal/direct/DirectChannel.java b/geode-core/src/main/java/org/apache/geode/distributed/internal/direct/DirectChannel.java
index 7d6d046..dbb4068 100644
--- a/geode-core/src/main/java/org/apache/geode/distributed/internal/direct/DirectChannel.java
+++ b/geode-core/src/main/java/org/apache/geode/distributed/internal/direct/DirectChannel.java
@@ -101,6 +101,15 @@ public class DirectChannel {
   }
 
   /**
+   * when the initial number of members is known, this method is invoked to ensure that connections
+   * to those members can be established in a reasonable amount of time. See bug 39848
+   *
+   */
+  public void setMembershipSize(int numberOfMembers) {
+    conduit.setMaximumHandshakePoolSize(numberOfMembers);
+  }
+
+  /**
    * Returns the cancel criterion for the channel, which will note if the channel is abnormally
    * closing
    */
@@ -471,9 +480,20 @@ public class DirectChannel {
       if (con.isSharedResource()) {
         continue;
       }
+      int msToWait = (int) (ackTimeout - (System.currentTimeMillis() - startTime));
+      // if the wait threshold has already been reached during transmission
+      // of the message, set a small wait period just to make sure the
+      // acks haven't already come back
+      if (msToWait <= 0) {
+        msToWait = 10;
+      }
+      long msInterval = ackSDTimeout;
+      if (msInterval <= 0) {
+        msInterval = Math.max(ackTimeout, 1000);
+      }
       try {
         try {
-          con.readAck(processor);
+          con.readAck(msToWait, msInterval, processor);
         } catch (SocketTimeoutException ex) {
           handleAckTimeout(ackTimeout, ackSDTimeout, con, processor);
         }
@@ -668,7 +688,7 @@ public class DirectChannel {
       // wait for ack-severe-alert-threshold period first, then wait forever
       if (ackSATimeout > 0) {
         try {
-          c.readAck(processor);
+          c.readAck((int) ackSATimeout, ackSATimeout, processor);
           return;
         } catch (SocketTimeoutException e) {
           Object[] args = new Object[] {Long.valueOf((ackSATimeout + ackTimeout) / 1000),
@@ -679,7 +699,7 @@ public class DirectChannel {
         }
       }
       try {
-        c.readAck(processor);
+        c.readAck(0, 0, processor);
       } catch (SocketTimeoutException ex) {
         // this can never happen when called with timeout of 0
         logger.error(String.format("Unexpected timeout while waiting for ack from %s",
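
For readability, the ack-wait arithmetic added in the DirectChannel hunks above can be restated on its own. The sketch below is illustrative only and is not part of the patch; it reuses the names from the hunks (ackTimeout, ackSDTimeout, startTime).

    // Sketch of the wait computation used before con.readAck(msToWait, msInterval, processor).
    final class AckWaitSketch {
      // Remaining wait budget for this connection; if the budget was already spent
      // while the message was being transmitted, still poll briefly in case the
      // acks have already arrived.
      static int msToWait(long ackTimeout, long startTime) {
        int remaining = (int) (ackTimeout - (System.currentTimeMillis() - startTime));
        return remaining <= 0 ? 10 : remaining;
      }

      // Polling interval: the severe-alert timeout if configured, otherwise the
      // ack timeout, but never less than one second.
      static long msInterval(long ackSDTimeout, long ackTimeout) {
        return ackSDTimeout > 0 ? ackSDTimeout : Math.max(ackTimeout, 1000);
      }
    }
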
diff --git a/geode-core/src/main/java/org/apache/geode/distributed/internal/membership/gms/membership/GMSJoinLeave.java b/geode-core/src/main/java/org/apache/geode/distributed/internal/membership/gms/membership/GMSJoinLeave.java
index e529c9e..7e4143d 100644
--- a/geode-core/src/main/java/org/apache/geode/distributed/internal/membership/gms/membership/GMSJoinLeave.java
+++ b/geode-core/src/main/java/org/apache/geode/distributed/internal/membership/gms/membership/GMSJoinLeave.java
@@ -2785,7 +2785,7 @@ public class GMSJoinLeave implements JoinLeave, MessageHandler {
       logger.debug("checking availability of these members: {}", checkers);
       ExecutorService svc =
           LoggingExecutors.newFixedThreadPool("Geode View Creator verification thread ",
-              true, suspects.size());
+              false, suspects.size());
       try {
         long giveUpTime = System.currentTimeMillis() + viewAckTimeout;
         // submit the tasks that will remove dead members from the suspects collection
diff --git a/geode-core/src/main/java/org/apache/geode/distributed/internal/membership/gms/mgr/GMSMembershipManager.java b/geode-core/src/main/java/org/apache/geode/distributed/internal/membership/gms/mgr/GMSMembershipManager.java
index 592c749..e0b554e 100644
--- a/geode-core/src/main/java/org/apache/geode/distributed/internal/membership/gms/mgr/GMSMembershipManager.java
+++ b/geode-core/src/main/java/org/apache/geode/distributed/internal/membership/gms/mgr/GMSMembershipManager.java
@@ -656,6 +656,8 @@ public class GMSMembershipManager implements MembershipManager, Manager {
         this.isJoining = true; // added for bug #44373
 
         // connect
+        long start = System.currentTimeMillis();
+
         boolean ok = services.getJoinLeave().join();
 
         if (!ok) {
@@ -663,6 +665,11 @@ public class GMSMembershipManager implements MembershipManager, Manager {
               + "Operation either timed out, was stopped or Locator does not exist.");
         }
 
+        long delta = System.currentTimeMillis() - start;
+
+        logger.info(LogMarker.DISTRIBUTION_MARKER, "Joined the distributed system (took  {}  ms)",
+            delta);
+
         NetView initialView = services.getJoinLeave().getView();
         latestView = new NetView(initialView, initialView.getViewId());
         listener.viewInstalled(latestView);
@@ -2527,6 +2534,9 @@ public class GMSMembershipManager implements MembershipManager, Manager {
   @Override
   public void installView(NetView v) {
     if (latestViewId < 0 && !isConnected()) {
+      if (this.directChannel != null) {
+        this.directChannel.setMembershipSize(v.getMembers().size());
+      }
       latestViewId = v.getViewId();
       latestView = v;
       logger.debug("MembershipManager: initial view is {}", latestView);
diff --git a/geode-core/src/main/java/org/apache/geode/distributed/internal/tcpserver/TcpClient.java b/geode-core/src/main/java/org/apache/geode/distributed/internal/tcpserver/TcpClient.java
index b3453b3..8961a6e 100644
--- a/geode-core/src/main/java/org/apache/geode/distributed/internal/tcpserver/TcpClient.java
+++ b/geode-core/src/main/java/org/apache/geode/distributed/internal/tcpserver/TcpClient.java
@@ -328,10 +328,6 @@ public class TcpClient {
     } finally {
       try {
         sock.setSoLinger(true, 0); // initiate an abort on close to shut down the server's socket
-      } catch (Exception e) {
-        logger.error("Error aborting socket ", e);
-      }
-      try {
         sock.close();
       } catch (Exception e) {
         logger.error("Error closing socket ", e);
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/EntryEventImpl.java b/geode-core/src/main/java/org/apache/geode/internal/cache/EntryEventImpl.java
index 7b84506..19efbce 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/EntryEventImpl.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/EntryEventImpl.java
@@ -2123,23 +2123,21 @@ public class EntryEventImpl implements InternalEntryEvent, InternalCacheEvent,
     buf.append(getRegion().getFullPath());
     buf.append(";key=");
     buf.append(this.getKey());
-    if (Boolean.getBoolean("gemfire.insecure-logvalues")) {
-      buf.append(";oldValue=");
-      try {
-        synchronized (this.offHeapLock) {
-          ArrayUtils.objectStringNonRecursive(basicGetOldValue(), buf);
-        }
-      } catch (IllegalStateException ignore) {
-        buf.append("OFFHEAP_VALUE_FREED");
+    buf.append(";oldValue=");
+    try {
+      synchronized (this.offHeapLock) {
+        ArrayUtils.objectStringNonRecursive(basicGetOldValue(), buf);
       }
-      buf.append(";newValue=");
-      try {
-        synchronized (this.offHeapLock) {
-          ArrayUtils.objectStringNonRecursive(basicGetNewValue(), buf);
-        }
-      } catch (IllegalStateException ignore) {
-        buf.append("OFFHEAP_VALUE_FREED");
+    } catch (IllegalStateException ignore) {
+      buf.append("OFFHEAP_VALUE_FREED");
+    }
+    buf.append(";newValue=");
+    try {
+      synchronized (this.offHeapLock) {
+        ArrayUtils.objectStringNonRecursive(basicGetNewValue(), buf);
       }
+    } catch (IllegalStateException ignore) {
+      buf.append("OFFHEAP_VALUE_FREED");
     }
     buf.append(";callbackArg=");
     buf.append(this.getRawCallbackArgument());
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/PRQueryProcessor.java b/geode-core/src/main/java/org/apache/geode/internal/cache/PRQueryProcessor.java
index 4182a38..92a8316 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/PRQueryProcessor.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/PRQueryProcessor.java
@@ -332,7 +332,7 @@ public class PRQueryProcessor {
     static synchronized void initializeExecutorService() {
       if (execService == null || execService.isShutdown() || execService.isTerminated()) {
         int numThreads = (TEST_NUM_THREADS > 1 ? TEST_NUM_THREADS : NUM_THREADS);
-        execService = LoggingExecutors.newFixedThreadPool("PRQueryProcessor", true, numThreads);
+        execService = LoggingExecutors.newFixedThreadPool("PRQueryProcessor", false, numThreads);
       }
     }
   }
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/properties.html b/geode-core/src/main/java/org/apache/geode/internal/cache/properties.html
index 6bbc3a5..18b3478 100755
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/properties.html
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/properties.html
@@ -2919,6 +2919,17 @@ See org.apache.geode.internal.tcp.TCPConduit#startAcceptor(int, int, InetAddress
 TBA 
 </dd>
 
+<!-- -------------------------------------------------------  -->
+<dt><strong>p2p.useSSL</strong></dt>
+<dd>
+<em>Public:</em> false
+<p>
+<em>Boolean</em> (default is false)
+<p>
+See org.apache.geode.internal.tcp.TCPConduit#useSSL.
+<p>
+TBA 
+</dd>
 
 <!-- -------------------------------------------------------  -->
 <dt><strong>query.disableIndexes</strong></dt>
diff --git a/geode-core/src/main/java/org/apache/geode/internal/monitoring/ThreadsMonitoringProcess.java b/geode-core/src/main/java/org/apache/geode/internal/monitoring/ThreadsMonitoringProcess.java
index d6b3344..ebd4ce1 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/monitoring/ThreadsMonitoringProcess.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/monitoring/ThreadsMonitoringProcess.java
@@ -60,8 +60,7 @@ public class ThreadsMonitoringProcess extends TimerTask {
       if (delta >= this.timeLimit) {
         isStuck = true;
         numOfStuck++;
-        logger.warn("Thread {} (0x{}) is stuck", entry1.getKey(),
-            Long.toHexString(entry1.getKey()));
+        logger.warn("Thread <{}> is stuck", entry1.getKey());
         entry1.getValue().handleExpiry(delta);
       }
     }
diff --git a/geode-core/src/main/java/org/apache/geode/internal/monitoring/executor/AbstractExecutor.java b/geode-core/src/main/java/org/apache/geode/internal/monitoring/executor/AbstractExecutor.java
index 8f98926..6864e29 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/monitoring/executor/AbstractExecutor.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/monitoring/executor/AbstractExecutor.java
@@ -60,15 +60,14 @@ public abstract class AbstractExecutor {
 
     StringBuilder strb = new StringBuilder();
 
-    strb.append("Thread <").append(this.threadID).append("> (0x")
-        .append(Long.toHexString(this.threadID)).append(") that was executed at <")
+    strb.append("Thread <").append(this.threadID).append("> that was executed at <")
         .append(dateFormat.format(this.getStartTime())).append("> has been stuck for <")
         .append((float) stuckTime / 1000)
         .append(" seconds> and number of thread monitor iteration <")
         .append(this.numIterationsStuck).append("> ").append(System.lineSeparator());
     if (logThreadDetails) {
       strb.append("Thread Name <").append(thread.getThreadName()).append(">")
-          .append(" state <").append(thread.getThreadState())
+          .append(System.lineSeparator()).append("Thread state <").append(thread.getThreadState())
           .append(">").append(System.lineSeparator());
 
       if (thread.getLockName() != null)
@@ -76,7 +75,7 @@ public abstract class AbstractExecutor {
             .append(System.lineSeparator());
 
       if (thread.getLockOwnerName() != null)
-        strb.append("Owned By <").append(thread.getLockOwnerName()).append("> with ID <")
+        strb.append("Owned By <").append(thread.getLockOwnerName()).append("> and ID <")
             .append(thread.getLockOwnerId()).append(">").append(System.lineSeparator());
     }
 
diff --git a/geode-core/src/main/java/org/apache/geode/internal/net/NioFilter.java b/geode-core/src/main/java/org/apache/geode/internal/net/NioFilter.java
deleted file mode 100644
index 6cb40ec..0000000
--- a/geode-core/src/main/java/org/apache/geode/internal/net/NioFilter.java
+++ /dev/null
@@ -1,87 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
- * agreements. See the NOTICE file distributed with this work for additional information regarding
- * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License. You may obtain a
- * copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License
- * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
- * or implied. See the License for the specific language governing permissions and limitations under
- * the License.
- */
-package org.apache.geode.internal.net;
-
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.nio.channels.SocketChannel;
-
-import org.apache.geode.distributed.internal.DMStats;
-
-/**
- * Prior to transmitting a buffer or processing a received buffer
- * a NioFilter should be called to wrap (transmit) or unwrap (received)
- * the buffer in case SSL is being used.
- */
-public interface NioFilter {
-
-  /**
-   * wrap bytes for transmission to another process
-   */
-  ByteBuffer wrap(ByteBuffer buffer) throws IOException;
-
-  /**
-   * unwrap bytes received from another process. The unwrapped
-   * buffer should be flipped before reading. When done reading invoke
-   * doneReading() to reset for future read ops
-   */
-  ByteBuffer unwrap(ByteBuffer wrappedBuffer) throws IOException;
-
-  /**
-   * ensure that the wrapped buffer has enough room to read the given amount of data.
-   * This must be invoked before readAtLeast. A new buffer may be returned by this method.
-   */
-  ByteBuffer ensureWrappedCapacity(int amount, ByteBuffer wrappedBuffer,
-      Buffers.BufferType bufferType, DMStats stats);
-
-  /**
-   * read at least the indicated amount of bytes from the given
-   * socket. The buffer position will be ready for reading
-   * the data when this method returns. Note: you must invoke ensureWrappedCapacity
-   * with the given amount prior to each invocation of this method.
-   * <br>
-   * wrappedBuffer = filter.ensureWrappedCapacity(amount, wrappedBuffer, etc.);<br>
-   * unwrappedBuffer = filter.readAtLeast(channel, amount, wrappedBuffer, etc.)
-   */
-  ByteBuffer readAtLeast(SocketChannel channel, int amount, ByteBuffer wrappedBuffer,
-      DMStats stats) throws IOException;
-
-  /**
-   * You must invoke this when done reading from the unwrapped buffer
-   */
-  default void doneReading(ByteBuffer unwrappedBuffer) {
-    if (unwrappedBuffer.position() != 0) {
-      unwrappedBuffer.compact();
-    } else {
-      unwrappedBuffer.position(unwrappedBuffer.limit());
-      unwrappedBuffer.limit(unwrappedBuffer.capacity());
-    }
-  }
-
-  /**
-   * invoke this method when you are done using the NioFilter
-   *
-   */
-  default void close(SocketChannel socketChannel) {
-    // nothing by default
-  }
-
-  /**
-   * returns the unwrapped byte buffer associated with the given wrapped buffer
-   */
-  ByteBuffer getUnwrappedBuffer(ByteBuffer wrappedBuffer);
-
-
-}
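
The javadoc of the NioFilter interface removed above already sketches the intended call order: ensureWrappedCapacity before each readAtLeast, then doneReading once the unwrapped data has been consumed. Below is a minimal sketch of that read pattern, assuming a NioFilter instance, an open SocketChannel and a DMStats object are already available; it is illustrative only and not part of the patch.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    import org.apache.geode.distributed.internal.DMStats;
    import org.apache.geode.internal.net.Buffers;
    import org.apache.geode.internal.net.NioFilter;

    final class NioFilterReadSketch {
      // Reads exactly 'amount' bytes through the filter, following the contract in
      // the NioFilter javadoc above, and returns the (possibly replaced) wrapped
      // buffer so the caller can reuse it for the next read.
      static ByteBuffer readExactly(NioFilter filter, SocketChannel channel,
          ByteBuffer wrappedBuffer, int amount, DMStats stats) throws IOException {
        wrappedBuffer = filter.ensureWrappedCapacity(amount, wrappedBuffer,
            Buffers.BufferType.TRACKED_RECEIVER, stats);
        ByteBuffer unwrapped = filter.readAtLeast(channel, amount, wrappedBuffer, stats);
        try {
          // ... consume 'amount' bytes from 'unwrapped' here ...
        } finally {
          filter.doneReading(unwrapped); // reset the unwrapped buffer for future reads
        }
        return wrappedBuffer;
      }
    }
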
diff --git a/geode-core/src/main/java/org/apache/geode/internal/net/NioPlainEngine.java b/geode-core/src/main/java/org/apache/geode/internal/net/NioPlainEngine.java
deleted file mode 100644
index 972c854..0000000
--- a/geode-core/src/main/java/org/apache/geode/internal/net/NioPlainEngine.java
+++ /dev/null
@@ -1,125 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
- * agreements. See the NOTICE file distributed with this work for additional information regarding
- * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License. You may obtain a
- * copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License
- * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
- * or implied. See the License for the specific language governing permissions and limitations under
- * the License.
- */
-package org.apache.geode.internal.net;
-
-
-import java.io.EOFException;
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.nio.channels.SocketChannel;
-
-import org.apache.logging.log4j.Logger;
-
-import org.apache.geode.distributed.internal.DMStats;
-import org.apache.geode.internal.Assert;
-import org.apache.geode.internal.logging.LogService;
-
-/**
- * A pass-through implementation of NioFilter. Use this if you don't need
- * secure communications.
- */
-public class NioPlainEngine implements NioFilter {
-  private static final Logger logger = LogService.getLogger();
-
-  int lastReadPosition;
-  int lastProcessedPosition;
-
-
-  public NioPlainEngine() {}
-
-  @Override
-  public ByteBuffer wrap(ByteBuffer buffer) {
-    return buffer;
-  }
-
-  @Override
-  public ByteBuffer unwrap(ByteBuffer wrappedBuffer) {
-    wrappedBuffer.position(wrappedBuffer.limit());
-    return wrappedBuffer;
-  }
-
-  @Override
-  public ByteBuffer ensureWrappedCapacity(int amount, ByteBuffer wrappedBuffer,
-      Buffers.BufferType bufferType, DMStats stats) {
-    ByteBuffer buffer = wrappedBuffer;
-
-    if (buffer.capacity() > amount) {
-      // we already have a buffer that's big enough
-      if (buffer.capacity() - lastProcessedPosition < amount) {
-        buffer.limit(lastReadPosition);
-        buffer.position(lastProcessedPosition);
-        buffer.compact();
-        lastReadPosition = buffer.position();
-        lastProcessedPosition = 0;
-      }
-    } else {
-      ByteBuffer oldBuffer = buffer;
-      oldBuffer.limit(lastReadPosition);
-      oldBuffer.position(lastProcessedPosition);
-      buffer = Buffers.acquireBuffer(bufferType, amount, stats);
-      buffer.clear();
-      buffer.put(oldBuffer);
-      Buffers.releaseBuffer(bufferType, oldBuffer, stats);
-      lastReadPosition = buffer.position();
-      lastProcessedPosition = 0;
-    }
-    return buffer;
-  }
-
-  @Override
-  public ByteBuffer readAtLeast(SocketChannel channel, int bytes, ByteBuffer wrappedBuffer,
-      DMStats stats) throws IOException {
-    ByteBuffer buffer = wrappedBuffer;
-
-    Assert.assertTrue(buffer.capacity() - lastProcessedPosition >= bytes);
-
-    // read into the buffer starting at the end of valid data
-    buffer.limit(buffer.capacity());
-    buffer.position(lastReadPosition);
-
-    while (buffer.position() < (lastProcessedPosition + bytes)) {
-      int amountRead = channel.read(buffer);
-      if (amountRead < 0) {
-        throw new EOFException();
-      }
-    }
-
-    // keep track of how much of the buffer contains valid data with lastReadPosition
-    lastReadPosition = buffer.position();
-
-    // set up the buffer for reading and keep track of how much has been consumed with
-    // lastProcessedPosition
-    buffer.limit(lastProcessedPosition + bytes);
-    buffer.position(lastProcessedPosition);
-    lastProcessedPosition += bytes;
-
-    return buffer;
-  }
-
-  public void doneReading(ByteBuffer unwrappedBuffer) {
-    if (unwrappedBuffer.position() != 0) {
-      unwrappedBuffer.compact();
-    } else {
-      unwrappedBuffer.position(unwrappedBuffer.limit());
-      unwrappedBuffer.limit(unwrappedBuffer.capacity());
-    }
-  }
-
-  @Override
-  public ByteBuffer getUnwrappedBuffer(ByteBuffer wrappedBuffer) {
-    return wrappedBuffer;
-  }
-
-}
diff --git a/geode-core/src/main/java/org/apache/geode/internal/net/NioSslEngine.java b/geode-core/src/main/java/org/apache/geode/internal/net/NioSslEngine.java
deleted file mode 100644
index 14c32fa..0000000
--- a/geode-core/src/main/java/org/apache/geode/internal/net/NioSslEngine.java
+++ /dev/null
@@ -1,415 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
- * agreements. See the NOTICE file distributed with this work for additional information regarding
- * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License. You may obtain a
- * copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License
- * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
- * or implied. See the License for the specific language governing permissions and limitations under
- * the License.
- */
-package org.apache.geode.internal.net;
-
-import static javax.net.ssl.SSLEngineResult.HandshakeStatus.FINISHED;
-import static javax.net.ssl.SSLEngineResult.HandshakeStatus.NEED_TASK;
-import static javax.net.ssl.SSLEngineResult.HandshakeStatus.NEED_UNWRAP;
-import static javax.net.ssl.SSLEngineResult.Status.BUFFER_OVERFLOW;
-import static javax.net.ssl.SSLEngineResult.Status.OK;
-import static org.apache.geode.internal.net.Buffers.BufferType.TRACKED_RECEIVER;
-import static org.apache.geode.internal.net.Buffers.BufferType.TRACKED_SENDER;
-import static org.apache.geode.internal.net.Buffers.releaseBuffer;
-
-import java.io.EOFException;
-import java.io.IOException;
-import java.net.SocketException;
-import java.net.SocketTimeoutException;
-import java.nio.ByteBuffer;
-import java.nio.channels.ClosedChannelException;
-import java.nio.channels.SocketChannel;
-import java.util.concurrent.TimeUnit;
-
-import javax.net.ssl.SSLEngine;
-import javax.net.ssl.SSLEngineResult;
-import javax.net.ssl.SSLException;
-import javax.net.ssl.SSLHandshakeException;
-import javax.net.ssl.SSLSession;
-
-import org.apache.logging.log4j.Logger;
-
-import org.apache.geode.GemFireIOException;
-import org.apache.geode.distributed.internal.DMStats;
-import org.apache.geode.internal.logging.LogService;
-
-
-/**
- * NioSslEngine uses an SSLEngine to bind SSL logic to a data source. This class is not thread
- * safe. Its use should be confined to one thread or should be protected by external
- * synchronization.
- */
-public class NioSslEngine implements NioFilter {
-  private static final Logger logger = LogService.getLogger();
-
-  private final DMStats stats;
-
-  private volatile boolean closed;
-
-  SSLEngine engine;
-
-  /**
-   * myNetData holds bytes wrapped by the SSLEngine
-   */
-  ByteBuffer myNetData;
-
-  /**
-   * peerAppData holds the last unwrapped data from a peer
-   */
-  ByteBuffer peerAppData;
-
-  /**
-   * buffer used to receive data during TLS handshake
-   */
-  ByteBuffer handshakeBuffer;
-
-  NioSslEngine(SSLEngine engine, DMStats stats) {
-    this.stats = stats;
-    SSLSession session = engine.getSession();
-    int appBufferSize = session.getApplicationBufferSize();
-    int packetBufferSize = engine.getSession().getPacketBufferSize();
-    this.myNetData = ByteBuffer.allocate(packetBufferSize);
-    this.peerAppData = ByteBuffer.allocate(appBufferSize);
-    this.engine = engine;
-  }
-
-  /**
-   * This will throw an SSLHandshakeException if the handshake doesn't terminate in a FINISHED
-   * state. It may throw other IOExceptions caused by I/O operations
-   */
-  public boolean handshake(SocketChannel socketChannel, int timeout,
-      ByteBuffer peerNetData)
-      throws IOException, InterruptedException {
-
-    if (peerNetData.capacity() < engine.getSession().getPacketBufferSize()) {
-      if (logger.isDebugEnabled()) {
-        logger.debug("Allocating new buffer for SSL handshake");
-      }
-      this.handshakeBuffer =
-          Buffers.acquireReceiveBuffer(engine.getSession().getPacketBufferSize(), stats);
-    } else {
-      this.handshakeBuffer = peerNetData;
-    }
-    this.handshakeBuffer.clear();
-
-    ByteBuffer myAppData = ByteBuffer.wrap(new byte[0]);
-
-    if (logger.isDebugEnabled()) {
-      logger.debug("Starting TLS handshake with {}.  Timeout is {}ms", socketChannel.socket(),
-          timeout);
-    }
-
-    long timeoutNanos = -1;
-    if (timeout > 0) {
-      timeoutNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeout);
-    }
-
-    // Begin handshake
-    engine.beginHandshake();
-    SSLEngineResult.HandshakeStatus status = engine.getHandshakeStatus();
-    SSLEngineResult engineResult = null;
-
-    // Process handshaking message
-    while (status != FINISHED &&
-        status != SSLEngineResult.HandshakeStatus.NOT_HANDSHAKING) {
-      if (socketChannel.socket().isClosed()) {
-        logger.info("Handshake terminated because socket is closed");
-        throw new SocketException("handshake terminated - socket is closed");
-      }
-
-      if (timeoutNanos > 0) {
-        if (timeoutNanos < System.nanoTime()) {
-          logger.info("TLS handshake is timing out");
-          throw new SocketTimeoutException("handshake timed out");
-        }
-      }
-
-      switch (status) {
-        case NEED_UNWRAP:
-          // Receive handshaking data from peer
-          int dataRead = socketChannel.read(handshakeBuffer);
-
-          // Process incoming handshaking data
-          handshakeBuffer.flip();
-          engineResult = engine.unwrap(handshakeBuffer, peerAppData);
-          handshakeBuffer.compact();
-          status = engineResult.getHandshakeStatus();
-
-          // if we're not finished, there's nothing to process and no data was read let's hang out
-          // for a little
-          if (peerAppData.remaining() == 0 && dataRead == 0 && status == NEED_UNWRAP) {
-            Thread.sleep(10);
-          }
-
-          if (engineResult.getStatus() == BUFFER_OVERFLOW) {
-            peerAppData =
-                expandWriteBuffer(TRACKED_RECEIVER, peerAppData, peerAppData.capacity() * 2,
-                    stats);
-          }
-          break;
-
-        case NEED_WRAP:
-          // Empty the local network packet buffer.
-          myNetData.clear();
-
-          // Generate handshaking data
-          engineResult = engine.wrap(myAppData, myNetData);
-          status = engineResult.getHandshakeStatus();
-
-          // Check status
-          switch (engineResult.getStatus()) {
-            case BUFFER_OVERFLOW:
-              myNetData =
-                  expandWriteBuffer(TRACKED_SENDER, myNetData,
-                      myNetData.capacity() * 2, stats);
-              break;
-            case OK:
-              myNetData.flip();
-              // Send the handshaking data to peer
-              while (myNetData.hasRemaining()) {
-                socketChannel.write(myNetData);
-              }
-              break;
-            case CLOSED:
-              break;
-            default:
-              logger.info("handshake terminated with illegal state due to {}", status);
-              throw new IllegalStateException(
-                  "Unknown SSLEngineResult status: " + engineResult.getStatus());
-          }
-          break;
-        case NEED_TASK:
-          // Handle blocking tasks
-          handleBlockingTasks();
-          status = engine.getHandshakeStatus();
-          break;
-        default:
-          logger.info("handshake terminated with illegal state due to {}", status);
-          throw new IllegalStateException("Unknown SSL Handshake state: " + status);
-      }
-      Thread.sleep(10);
-    }
-    if (status != FINISHED) {
-      logger.info("handshake terminated with exception due to {}", status);
-      throw new SSLHandshakeException("SSL Handshake terminated with status " + status);
-    }
-    if (logger.isDebugEnabled()) {
-      if (engineResult != null) {
-        logger.debug("TLS handshake successful.  result={} and handshakeResult={}",
-            engineResult.getStatus(), engine.getHandshakeStatus());
-      } else {
-        logger.debug("TLS handshake successful.  handshakeResult={}",
-            engine.getHandshakeStatus());
-      }
-    }
-    return true;
-  }
-
-  ByteBuffer expandWriteBuffer(Buffers.BufferType type, ByteBuffer existing,
-      int desiredCapacity, DMStats stats) {
-    return Buffers.expandWriteBufferIfNeeded(type, existing, desiredCapacity, stats);
-  }
-
-  void checkClosed() {
-    if (closed) {
-      throw new IllegalStateException("NioSslEngine has been closed");
-    }
-  }
-
-  void handleBlockingTasks() {
-    Runnable task;
-    while ((task = engine.getDelegatedTask()) != null) {
-      // these tasks could be run in other threads but the SSLEngine will block until they finish
-      task.run();
-    }
-  }
-
-  @Override
-  public synchronized ByteBuffer wrap(ByteBuffer appData) throws IOException {
-    checkClosed();
-
-    myNetData.clear();
-
-    while (appData.hasRemaining()) {
-      // ensure we have lots of capacity since encrypted data might
-      // be larger than the app data
-      int remaining = myNetData.capacity() - myNetData.position();
-
-      if (remaining < (appData.remaining() * 2)) {
-        int newCapacity = expandedCapacity(appData, myNetData);
-        myNetData = expandWriteBuffer(TRACKED_SENDER, myNetData, newCapacity, stats);
-      }
-
-      SSLEngineResult wrapResult = engine.wrap(appData, myNetData);
-
-      if (wrapResult.getHandshakeStatus() == NEED_TASK) {
-        handleBlockingTasks();
-      }
-
-      if (wrapResult.getStatus() != OK) {
-        throw new SSLException("Error encrypting data: " + wrapResult);
-      }
-    }
-
-    myNetData.flip();
-
-    return myNetData;
-  }
-
-  @Override
-  public synchronized ByteBuffer unwrap(ByteBuffer wrappedBuffer) throws IOException {
-    checkClosed();
-
-    // note that we do not clear peerAppData as it may hold a partial
-    // message. TcpConduit, for instance, uses message chunking to
-    // transmit large payloads and we may have read a partial chunk
-    // during the previous unwrap
-
-    // it's better to be pro-active about avoiding buffer overflows
-    expandPeerAppData(wrappedBuffer);
-    peerAppData.limit(peerAppData.capacity());
-    while (wrappedBuffer.hasRemaining()) {
-      SSLEngineResult unwrapResult = engine.unwrap(wrappedBuffer, peerAppData);
-      switch (unwrapResult.getStatus()) {
-        case BUFFER_OVERFLOW:
-          expandPeerAppData(wrappedBuffer);
-          break;
-        case BUFFER_UNDERFLOW:
-          // partial data - need to read more. When this happens the SSLEngine will not have
-          // changed the buffer position
-          wrappedBuffer.compact();
-          return peerAppData;
-        case OK:
-          break;
-        default:
-          throw new SSLException("Error decrypting data: " + unwrapResult);
-      }
-    }
-    wrappedBuffer.clear();
-    return peerAppData;
-  }
-
-  void expandPeerAppData(ByteBuffer wrappedBuffer) {
-    if (peerAppData.capacity() - peerAppData.position() < 2 * wrappedBuffer.remaining()) {
-      peerAppData =
-          Buffers.expandWriteBufferIfNeeded(TRACKED_RECEIVER, peerAppData,
-              expandedCapacity(wrappedBuffer, peerAppData), stats);
-    }
-  }
-
-  @Override
-  public ByteBuffer ensureWrappedCapacity(int amount, ByteBuffer wrappedBuffer,
-      Buffers.BufferType bufferType, DMStats stats) {
-    if (wrappedBuffer == null) {
-      wrappedBuffer = Buffers.acquireBuffer(bufferType, amount, stats);
-    }
-    return wrappedBuffer;
-  }
-
-  @Override
-  public ByteBuffer readAtLeast(SocketChannel channel, int bytes,
-      ByteBuffer wrappedBuffer, DMStats stats) throws IOException {
-    if (peerAppData.capacity() > bytes) {
-      // we already have a buffer that's big enough
-      if (peerAppData.capacity() - peerAppData.position() < bytes) {
-        peerAppData.compact();
-        peerAppData.flip();
-      }
-    } else {
-      peerAppData =
-          Buffers.expandReadBufferIfNeeded(TRACKED_RECEIVER, peerAppData, bytes, this.stats);
-    }
-
-    while (peerAppData.remaining() < bytes) {
-      wrappedBuffer.limit(wrappedBuffer.capacity());
-      int amountRead = channel.read(wrappedBuffer);
-      if (amountRead < 0) {
-        throw new EOFException();
-      }
-      if (amountRead > 0) {
-        wrappedBuffer.flip();
-        // prep the decoded buffer for writing
-        peerAppData.compact();
-        peerAppData = unwrap(wrappedBuffer);
-        // done writing to the decoded buffer - prep it for reading again
-        peerAppData.flip();
-      }
-    }
-    return peerAppData;
-  }
-
-  @Override
-  public ByteBuffer getUnwrappedBuffer(ByteBuffer wrappedBuffer) {
-    return peerAppData;
-  }
-
-  /**
-   * ensures that the unwrapped buffer associated with the given wrapped buffer has
-   * sufficient capacity for the given amount of bytes. This may compact the
-   * buffer or it may return a new buffer.
-   */
-  public ByteBuffer ensureUnwrappedCapacity(int amount) {
-    // for TTLS the app-data buffers do not need to be tracked direct-buffers since we
-    // do not use them for I/O operations
-    peerAppData =
-        Buffers.expandReadBufferIfNeeded(TRACKED_RECEIVER, peerAppData, amount, this.stats);
-    return peerAppData;
-  }
-
-
-  @Override
-  public void close(SocketChannel socketChannel) {
-    if (closed) {
-      return;
-    }
-    try {
-
-      if (!engine.isOutboundDone()) {
-        ByteBuffer empty = ByteBuffer.wrap(new byte[0]);
-        engine.closeOutbound();
-
-        // clear the buffer to receive a CLOSE message from the SSLEngine
-        myNetData.clear();
-
-        // Get close message
-        SSLEngineResult result = engine.wrap(empty, myNetData);
-
-        if (result.getStatus() != SSLEngineResult.Status.CLOSED) {
-          throw new SSLHandshakeException(
-              "Error closing SSL session.  Status=" + result.getStatus());
-        }
-
-        // Send close message to peer
-        myNetData.flip();
-        while (myNetData.hasRemaining()) {
-          socketChannel.write(myNetData);
-        }
-      }
-    } catch (ClosedChannelException e) {
-      // we can't send a close message if the channel is closed
-    } catch (IOException e) {
-      throw new GemFireIOException("exception closing SSL session", e);
-    } finally {
-      releaseBuffer(TRACKED_SENDER, myNetData, stats);
-      releaseBuffer(TRACKED_RECEIVER, peerAppData, stats);
-      this.closed = true;
-    }
-  }
-
-  private int expandedCapacity(ByteBuffer sourceBuffer, ByteBuffer targetBuffer) {
-    return Math.max(targetBuffer.position() + sourceBuffer.remaining() * 2,
-        targetBuffer.capacity() * 2);
-  }
-
-}
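
As a companion to the removed engine above, here is a condensed sketch of how the deleted SSLSocketIntegrationTest code earlier in this diff drove it: obtain a NioSslEngine from SocketCreator.handshakeSSLSocketChannel, wrap outgoing application data before writing, and unwrap incoming data before reading. It is illustrative only; the host, port, timeout and buffer sizes are placeholders.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    import org.apache.geode.distributed.internal.DMStats;
    import org.apache.geode.internal.net.NioSslEngine;
    import org.apache.geode.internal.net.SocketCreator;

    final class NioSslEngineSketch {
      // Client side: handshake, then encrypt application bytes before writing them.
      static NioSslEngine handshakeAndSend(SocketCreator sc, SocketChannel channel,
          DMStats stats) throws IOException {
        NioSslEngine engine = sc.handshakeSSLSocketChannel(channel,
            sc.createSSLEngine("localhost", 1234), 10000 /* timeout ms */,
            true /* clientSocket */, ByteBuffer.allocate(65535), stats);
        ByteBuffer appData = ByteBuffer.wrap("Hello world".getBytes());
        ByteBuffer wrapped = engine.wrap(appData); // encrypted, ready for writing
        while (wrapped.hasRemaining()) {
          channel.write(wrapped);
        }
        return engine;
      }

      // Receiving side: read encrypted bytes, decrypt them, then reset the buffer.
      static void receive(NioSslEngine engine, SocketChannel channel,
          ByteBuffer netBuffer) throws IOException {
        channel.read(netBuffer);
        netBuffer.flip();
        ByteBuffer unwrapped = engine.unwrap(netBuffer);
        unwrapped.flip();
        // ... read the decrypted message from 'unwrapped' here ...
        engine.doneReading(unwrapped);
      }
    }
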
diff --git a/geode-core/src/main/java/org/apache/geode/internal/net/SocketCreator.java b/geode-core/src/main/java/org/apache/geode/internal/net/SocketCreator.java
index 6e39373..1aa28ed 100755
--- a/geode-core/src/main/java/org/apache/geode/internal/net/SocketCreator.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/net/SocketCreator.java
@@ -29,9 +29,7 @@ import java.net.SocketAddress;
 import java.net.SocketException;
 import java.net.SocketTimeoutException;
 import java.net.UnknownHostException;
-import java.nio.ByteBuffer;
 import java.nio.channels.ServerSocketChannel;
-import java.nio.channels.SocketChannel;
 import java.security.GeneralSecurityException;
 import java.security.KeyStore;
 import java.security.KeyStoreException;
@@ -65,7 +63,6 @@ import javax.net.ssl.KeyManager;
 import javax.net.ssl.KeyManagerFactory;
 import javax.net.ssl.SSLContext;
 import javax.net.ssl.SSLEngine;
-import javax.net.ssl.SSLException;
 import javax.net.ssl.SSLHandshakeException;
 import javax.net.ssl.SSLParameters;
 import javax.net.ssl.SSLPeerUnverifiedException;
@@ -87,7 +84,6 @@ import org.apache.geode.annotations.internal.MakeNotStatic;
 import org.apache.geode.cache.wan.GatewaySender;
 import org.apache.geode.cache.wan.GatewayTransportFilter;
 import org.apache.geode.distributed.ClientSocketFactory;
-import org.apache.geode.distributed.internal.DMStats;
 import org.apache.geode.distributed.internal.DistributionConfig;
 import org.apache.geode.distributed.internal.DistributionConfigImpl;
 import org.apache.geode.distributed.internal.InternalDistributedSystem;
@@ -99,6 +95,7 @@ import org.apache.geode.internal.admin.SSLConfig;
 import org.apache.geode.internal.cache.wan.TransportFilterServerSocket;
 import org.apache.geode.internal.cache.wan.TransportFilterSocketFactory;
 import org.apache.geode.internal.logging.LogService;
+import org.apache.geode.internal.security.SecurableCommunicationChannel;
 import org.apache.geode.internal.util.ArgumentRedactor;
 import org.apache.geode.internal.util.PasswordUtil;
 
@@ -171,6 +168,11 @@ public class SocketCreator {
   public static volatile boolean use_client_host_name = true;
 
   /**
+   * True if this SocketCreator has been initialized and is ready to use
+   */
+  private boolean ready = false;
+
+  /**
    * Only print this SocketCreator's config once
    */
   private boolean configShown = false;
@@ -229,9 +231,6 @@ public class SocketCreator {
             SocketCreator.useIPv6Addresses = true;
           }
         }
-        if (inetAddress == null) {
-          inetAddress = InetAddress.getLocalHost();
-        }
       }
     } catch (UnknownHostException e) {
     }
@@ -340,6 +339,18 @@ public class SocketCreator {
    */
   private void initialize() {
     try {
+      // set p2p values...
+      if (SecurableCommunicationChannel.CLUSTER
+          .equals(sslConfig.getSecuredCommunicationChannel())) {
+        if (this.sslConfig.isEnabled()) {
+          System.setProperty("p2p.useSSL", "true");
+          System.setProperty("p2p.oldIO", "true");
+          System.setProperty("p2p.nodirectBuffers", "true");
+        } else {
+          System.setProperty("p2p.useSSL", "false");
+        }
+      }
+
       try {
         if (this.sslConfig.isEnabled() && sslContext == null) {
           sslContext = createAndConfigureSSLContext();
@@ -352,7 +363,7 @@ public class SocketCreator {
       org.apache.geode.internal.tcp.TCPConduit.init();
 
       initializeClientSocketFactory();
-
+      this.ready = true;
     } catch (VirtualMachineError err) {
       SystemFailure.initiateFailure(err);
       // If this ever returns, rethrow the error. We're poisoned
@@ -528,7 +539,6 @@ public class SocketCreator {
           System.getProperty("user.home") + System.getProperty("file.separator") + ".keystore";
     }
 
-
     FileInputStream fileInputStream = new FileInputStream(keyStoreFilePath);
     String passwordString = sslConfig.getKeystorePassword();
     char[] password = null;
@@ -632,12 +642,6 @@ public class SocketCreator {
     }
 
     @Override
-    public String chooseEngineClientAlias(String[] keyTypes, Principal[] principals,
-        SSLEngine sslEngine) {
-      return delegate.chooseEngineClientAlias(keyTypes, principals, sslEngine);
-    }
-
-    @Override
     public String chooseEngineServerAlias(final String keyType, final Principal[] principals,
         final SSLEngine sslEngine) {
       if (!StringUtils.isEmpty(this.keyAlias)) {
@@ -867,6 +871,14 @@ public class SocketCreator {
   }
 
   /**
+   * Return a client socket. This method is used by peers.
+   */
+  public Socket connectForServer(InetAddress inetadd, int port, int socketBufferSize)
+      throws IOException {
+    return connect(inetadd, port, 0, null, false, socketBufferSize);
+  }
+
+  /**
    * Return a client socket, timing out if unable to connect and timeout > 0 (millis). The parameter
    * <i>timeout</i> is ignored if SSL is being used, as there is no timeout argument in the ssl
    * socket factory
@@ -955,76 +967,6 @@ public class SocketCreator {
   }
 
   /**
-   * Returns an SSLEngine that can be used to perform TLS handshakes and communication
-   */
-  public SSLEngine createSSLEngine(String hostName, int port) {
-    return sslContext.createSSLEngine(hostName, port);
-  }
-
-  /**
-   * @see <a
-   *      href=https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html#SSLENG">JSSE
-   *      Reference Guide</a>
-   *
-   * @param socketChannel the socket's NIO channel
-   * @param engine the sslEngine (see createSSLEngine)
-   * @param timeout handshake timeout in milliseconds. No timeout if <= 0
-   * @param clientSocket set to true if you initiated the connect(), false if you accepted it
-   * @param peerNetBuffer the buffer to use in reading data fron socketChannel. This should also be
-   *        used in subsequent I/O operations
-   * @return The SSLEngine to be used in processing data for sending/receiving from the channel
-   */
-  public NioSslEngine handshakeSSLSocketChannel(SocketChannel socketChannel, SSLEngine engine,
-      int timeout,
-      boolean clientSocket,
-      ByteBuffer peerNetBuffer,
-      DMStats stats)
-      throws IOException {
-    engine.setUseClientMode(clientSocket);
-    while (!socketChannel.finishConnect()) {
-      try {
-        Thread.sleep(50);
-      } catch (InterruptedException e) {
-        if (!socketChannel.socket().isClosed()) {
-          socketChannel.close();
-        }
-        throw new IOException("Interrupted while performing handshake", e);
-      }
-    }
-
-    NioSslEngine nioSslEngine = new NioSslEngine(engine, stats);
-
-    boolean blocking = socketChannel.isBlocking();
-    if (blocking) {
-      socketChannel.configureBlocking(false);
-    }
-
-    try {
-      nioSslEngine.handshake(socketChannel, timeout, peerNetBuffer);
-    } catch (SSLException e) {
-      if (!socketChannel.socket().isClosed()) {
-        socketChannel.close();
-      }
-      logger.warn("SSL handshake exception", e);
-      throw e;
-    } catch (InterruptedException e) {
-      if (!socketChannel.socket().isClosed()) {
-        socketChannel.close();
-      }
-      throw new IOException("SSL handshake interrupted");
-    } finally {
-      if (blocking) {
-        try {
-          socketChannel.configureBlocking(true);
-        } catch (IOException ignored) {
-          // problem setting the socket back to blocking mode but the socket's going to be closed
-        }
-      }
-    }
-    return nioSslEngine;
-  }
-
-  /**
    * Use this method to perform the SSL handshake on a newly accepted socket. Non-SSL
    * sockets are ignored by this method.
    *
@@ -1142,13 +1084,13 @@ public class SocketCreator {
         }
       } catch (SSLHandshakeException ex) {
         logger
-            .fatal(String.format("Problem forming SSL connection to %s[%s].",
-                socket.getInetAddress(), Integer.valueOf(socket.getPort())),
+            .fatal(String.format("SSL Error in connecting to peer %s[%s].",
+                new Object[] {socket.getInetAddress(), Integer.valueOf(socket.getPort())}),
                 ex);
         throw ex;
       } catch (SSLPeerUnverifiedException ex) {
         if (this.sslConfig.isRequireAuth()) {
-          logger.fatal("SSL authentication exception.", ex);
+          logger.fatal("SSL Error in authenticating peer.", ex);
           throw ex;
         }
       }
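
For completeness, the block added to SocketCreator.initialize() above corresponds to the p2p.useSSL entry added to properties.html earlier in this diff. The sketch below simply restates which system properties are set for the CLUSTER channel; the explanatory comments are assumptions based on the property names, not statements from the patch.

    final class ClusterSslSystemProps {
      // Mirrors the branch added in SocketCreator.initialize() above; illustrative only.
      static void apply(boolean clusterSslEnabled) {
        if (clusterSslEnabled) {
          System.setProperty("p2p.useSSL", "true");          // read by TCPConduit (see the properties.html entry)
          System.setProperty("p2p.oldIO", "true");           // assumption: selects the old-IO conduit path
          System.setProperty("p2p.nodirectBuffers", "true"); // assumption: avoids direct byte buffers
        } else {
          System.setProperty("p2p.useSSL", "false");
        }
      }
    }
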
diff --git a/geode-core/src/main/java/org/apache/geode/internal/statistics/VMStats.java b/geode-core/src/main/java/org/apache/geode/internal/statistics/VMStats.java
index 077291f..a1214ed 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/statistics/VMStats.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/statistics/VMStats.java
@@ -38,7 +38,7 @@ public class VMStats implements VMStatsContract {
             f.createIntGauge("cpus", "Number of cpus available to the java VM on its machine.",
                 "cpus", true),
             f.createLongGauge("freeMemory",
-                "An approximation of the total amount of memory currently available for future allocated objects, measured in bytes.",
+                "An approximation fo the total amount of memory currently available for future allocated objects, measured in bytes.",
                 "bytes", true),
             f.createLongGauge("totalMemory",
                 "The total amount of memory currently available for current and future objects, measured in bytes.",
diff --git a/geode-core/src/main/java/org/apache/geode/internal/statistics/platform/LinuxProcFsStatistics.java b/geode-core/src/main/java/org/apache/geode/internal/statistics/platform/LinuxProcFsStatistics.java
index 6b8c92d..3f687ca 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/statistics/platform/LinuxProcFsStatistics.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/statistics/platform/LinuxProcFsStatistics.java
@@ -64,8 +64,7 @@ public class LinuxProcFsStatistics {
   private static boolean hasProcVmStat;
   @MakeNotStatic
   private static boolean hasDiskStats;
-  @MakeNotStatic
-  static SpaceTokenizer tokenizer;
+  static SpaceTokenizer st;
 
   /** The number of non-process files in /proc */
   @MakeNotStatic
@@ -92,12 +91,13 @@ public class LinuxProcFsStatistics {
     cpuStatSingleton = new CpuStat();
     hasProcVmStat = new File("/proc/vmstat").exists();
     hasDiskStats = new File("/proc/diskstats").exists();
-    tokenizer = new SpaceTokenizer();
+    st = new SpaceTokenizer();
     return 0;
   }
 
   public static void close() { // TODO: was package-protected
     cpuStatSingleton = null;
+    st = null;
   }
 
   public static void readyRefresh() { // TODO: was package-protected
@@ -125,11 +125,10 @@ public class LinuxProcFsStatistics {
       if (line == null) {
         return;
       }
-      tokenizer.setString(line);
-      tokenizer.skipTokens(22);
-      ints[LinuxProcessStats.imageSizeINT] = (int) (tokenizer.nextTokenAsLong() / OneMeg);
-      ints[LinuxProcessStats.rssSizeINT] =
-          (int) ((tokenizer.nextTokenAsLong() * pageSize) / OneMeg);
+      st.setString(line);
+      st.skipTokens(22);
+      ints[LinuxProcessStats.imageSizeINT] = (int) (st.nextTokenAsLong() / OneMeg);
+      ints[LinuxProcessStats.rssSizeINT] = (int) ((st.nextTokenAsLong() * pageSize) / OneMeg);
     } catch (NoSuchElementException nsee) {
       // It might just be a case of the process going away while we
       // where trying to get its stats.
@@ -141,7 +140,7 @@ public class LinuxProcFsStatistics {
       // So for now lets just ignore the failure and leave the stats
       // as they are.
     } finally {
-      tokenizer.releaseResources();
+      st.releaseResources();
       if (br != null)
         try {
           br.close();
@@ -152,10 +151,6 @@ public class LinuxProcFsStatistics {
 
   public static void refreshSystem(int[] ints, long[] longs, double[] doubles) { // TODO: was
                                                                                  // package-protected
-    if (cpuStatSingleton == null) {
-      // stats have been closed or haven't been properly initialized
-      return;
-    }
     ints[LinuxSystemStats.processesINT] = getProcessCount();
     ints[LinuxSystemStats.cpusINT] = sys_cpus;
     InputStreamReader isr = null;
@@ -220,7 +215,7 @@ public class LinuxProcFsStatistics {
     if (hasProcVmStat) {
       getVmStats(longs);
     }
-    tokenizer.releaseResources();
+    st.releaseResources();
   }
 
   // Example of /proc/loadavg
@@ -235,14 +230,14 @@ public class LinuxProcFsStatistics {
       if (line == null) {
         return;
       }
-      tokenizer.setString(line);
-      doubles[LinuxSystemStats.loadAverage1DOUBLE] = tokenizer.nextTokenAsDouble();
-      doubles[LinuxSystemStats.loadAverage5DOUBLE] = tokenizer.nextTokenAsDouble();
-      doubles[LinuxSystemStats.loadAverage15DOUBLE] = tokenizer.nextTokenAsDouble();
+      st.setString(line);
+      doubles[LinuxSystemStats.loadAverage1DOUBLE] = st.nextTokenAsDouble();
+      doubles[LinuxSystemStats.loadAverage5DOUBLE] = st.nextTokenAsDouble();
+      doubles[LinuxSystemStats.loadAverage15DOUBLE] = st.nextTokenAsDouble();
     } catch (NoSuchElementException nsee) {
     } catch (IOException ioe) {
     } finally {
-      tokenizer.releaseResources();
+      st.releaseResources();
       if (br != null)
         try {
           br.close();
@@ -300,41 +295,41 @@ public class LinuxProcFsStatistics {
       while ((line = br.readLine()) != null) {
         try {
           if (line.startsWith("MemTotal: ")) {
-            tokenizer.setString(line);
-            tokenizer.skipToken(); // Burn initial token
-            ints[LinuxSystemStats.physicalMemoryINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
+            st.setString(line);
+            st.skipToken(); // Burn initial token
+            ints[LinuxSystemStats.physicalMemoryINT] = (int) (st.nextTokenAsLong() / 1024);
           } else if (line.startsWith("MemFree: ")) {
-            tokenizer.setString(line);
-            tokenizer.skipToken(); // Burn initial token
-            ints[LinuxSystemStats.freeMemoryINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
+            st.setString(line);
+            st.skipToken(); // Burn initial token
+            ints[LinuxSystemStats.freeMemoryINT] = (int) (st.nextTokenAsLong() / 1024);
           } else if (line.startsWith("SharedMem: ")) {
-            tokenizer.setString(line);
-            tokenizer.skipToken(); // Burn initial token
-            ints[LinuxSystemStats.sharedMemoryINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
+            st.setString(line);
+            st.skipToken(); // Burn initial token
+            ints[LinuxSystemStats.sharedMemoryINT] = (int) (st.nextTokenAsLong() / 1024);
           } else if (line.startsWith("Buffers: ")) {
-            tokenizer.setString(line);
-            tokenizer.nextToken(); // Burn initial token
-            ints[LinuxSystemStats.bufferMemoryINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
+            st.setString(line);
+            st.nextToken(); // Burn initial token
+            ints[LinuxSystemStats.bufferMemoryINT] = (int) (st.nextTokenAsLong() / 1024);
           } else if (line.startsWith("SwapTotal: ")) {
-            tokenizer.setString(line);
-            tokenizer.skipToken(); // Burn initial token
-            ints[LinuxSystemStats.allocatedSwapINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
+            st.setString(line);
+            st.skipToken(); // Burn initial token
+            ints[LinuxSystemStats.allocatedSwapINT] = (int) (st.nextTokenAsLong() / 1024);
           } else if (line.startsWith("SwapFree: ")) {
-            tokenizer.setString(line);
-            tokenizer.skipToken(); // Burn initial token
-            ints[LinuxSystemStats.unallocatedSwapINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
+            st.setString(line);
+            st.skipToken(); // Burn initial token
+            ints[LinuxSystemStats.unallocatedSwapINT] = (int) (st.nextTokenAsLong() / 1024);
           } else if (line.startsWith("Cached: ")) {
-            tokenizer.setString(line);
-            tokenizer.skipToken(); // Burn initial token
-            ints[LinuxSystemStats.cachedMemoryINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
+            st.setString(line);
+            st.skipToken(); // Burn initial token
+            ints[LinuxSystemStats.cachedMemoryINT] = (int) (st.nextTokenAsLong() / 1024);
           } else if (line.startsWith("Dirty: ")) {
-            tokenizer.setString(line);
-            tokenizer.skipToken(); // Burn initial token
-            ints[LinuxSystemStats.dirtyMemoryINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
+            st.setString(line);
+            st.skipToken(); // Burn initial token
+            ints[LinuxSystemStats.dirtyMemoryINT] = (int) (st.nextTokenAsLong() / 1024);
           } else if (line.startsWith("Inact_dirty: ")) { // 2.4 kernels
-            tokenizer.setString(line);
-            tokenizer.skipToken(); // Burn initial token
-            ints[LinuxSystemStats.dirtyMemoryINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
+            st.setString(line);
+            st.skipToken(); // Burn initial token
+            ints[LinuxSystemStats.dirtyMemoryINT] = (int) (st.nextTokenAsLong() / 1024);
           }
         } catch (NoSuchElementException nsee) {
           // ignore and let that stat not to be updated this time
@@ -342,7 +337,7 @@ public class LinuxProcFsStatistics {
       }
     } catch (IOException ioe) {
     } finally {
-      tokenizer.releaseResources();
+      st.releaseResources();
       if (br != null)
         try {
           br.close();
@@ -356,38 +351,38 @@ public class LinuxProcFsStatistics {
    * ListenOverflows=20 ListenDrops=21
    */
   private static void getNetStatStats(long[] longs, int[] ints) {
-    try (InputStreamReader isr = new InputStreamReader(new FileInputStream("/proc/net/netstat"))) {
-      BufferedReader br = new BufferedReader(isr);
+    InputStreamReader isr;
+    BufferedReader br = null;
+    try {
+      isr = new InputStreamReader(new FileInputStream("/proc/net/netstat"));
+      br = new BufferedReader(isr);
       String line;
       do {
         br.readLine(); // header
         line = br.readLine();
       } while (line != null && !line.startsWith("TcpExt:"));
 
-      tokenizer.setString(line);
-      tokenizer.skipTokens(1);
-      long tcpSyncookiesSent = tokenizer.nextTokenAsLong();
-      long tcpSyncookiesRecv = tokenizer.nextTokenAsLong();
-      tokenizer.skipTokens(17);
-      long tcpListenOverflows = tokenizer.nextTokenAsLong();
-      long tcpListenDrops = tokenizer.nextTokenAsLong();
+      st.setString(line);
+      st.skipTokens(1);
+      long tcpSyncookiesSent = st.nextTokenAsLong();
+      long tcpSyncookiesRecv = st.nextTokenAsLong();
+      st.skipTokens(17);
+      long tcpListenOverflows = st.nextTokenAsLong();
+      long tcpListenDrops = st.nextTokenAsLong();
 
       longs[LinuxSystemStats.tcpExtSynCookiesRecvLONG] = tcpSyncookiesRecv;
       longs[LinuxSystemStats.tcpExtSynCookiesSentLONG] = tcpSyncookiesSent;
       longs[LinuxSystemStats.tcpExtListenDropsLONG] = tcpListenDrops;
       longs[LinuxSystemStats.tcpExtListenOverflowsLONG] = tcpListenOverflows;
 
-      br.close();
-      br = null;
       if (!soMaxConnProcessed) {
-        try (InputStreamReader soMaxConnIsr =
-            new InputStreamReader(new FileInputStream("/proc/sys/net/core/somaxconn"))) {
-          BufferedReader br2 = new BufferedReader(soMaxConnIsr);
-          line = br2.readLine();
-          tokenizer.setString(line);
-          soMaxConn = tokenizer.nextTokenAsInt();
-          soMaxConnProcessed = true;
-        }
+        br.close();
+        isr = new InputStreamReader(new FileInputStream("/proc/sys/net/core/somaxconn"));
+        br = new BufferedReader(isr);
+        line = br.readLine();
+        st.setString(line);
+        soMaxConn = st.nextTokenAsInt();
+        soMaxConnProcessed = true;
       }
 
       ints[LinuxSystemStats.tcpSOMaxConnINT] = soMaxConn;
@@ -395,7 +390,13 @@ public class LinuxProcFsStatistics {
     } catch (NoSuchElementException nsee) {
     } catch (IOException ioe) {
     } finally {
-      tokenizer.releaseResources();
+      st.releaseResources();
+      if (br != null) {
+        try {
+          br.close();
+        } catch (IOException ignore) {
+        }
+      }
     }
   }
 
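The block above pulls the TcpExt counters out of /proc/net/netstat by fixed column position and caches somaxconn from /proc/sys/net/core/somaxconn. A minimal, self-contained sketch of reading the same files, matching columns by name instead of position (plain JDK I/O rather than the Geode tokenizer; the field selection is only illustrative):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class NetstatSketch {
      public static void main(String[] args) throws IOException {
        String header = null;
        String values = null;
        try (BufferedReader br = new BufferedReader(new FileReader("/proc/net/netstat"))) {
          String line;
          while ((line = br.readLine()) != null && values == null) {
            if (line.startsWith("TcpExt:")) {
              if (header == null) {
                header = line; // first TcpExt line lists the column names
              } else {
                values = line; // second TcpExt line holds the counters
              }
            }
          }
        }
        if (header != null && values != null) {
          String[] names = header.split("\\s+");
          String[] counts = values.split("\\s+");
          for (int i = 1; i < names.length && i < counts.length; i++) {
            if (names[i].equals("SyncookiesSent") || names[i].equals("SyncookiesRecv")
                || names[i].equals("ListenOverflows") || names[i].equals("ListenDrops")) {
              System.out.println(names[i] + " = " + Long.parseLong(counts[i]));
            }
          }
        }
        // somaxconn is a single integer in a file of its own
        try (BufferedReader br =
            new BufferedReader(new FileReader("/proc/sys/net/core/somaxconn"))) {
          System.out.println("somaxconn = " + Integer.parseInt(br.readLine().trim()));
        }
      }
    }
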
@@ -422,18 +423,18 @@ public class LinuxProcFsStatistics {
       while ((line = br.readLine()) != null) {
         int index = line.indexOf(":");
         boolean isloopback = (line.indexOf("lo:") != -1);
-        tokenizer.setString(line.substring(index + 1).trim());
-        long recv_bytes = tokenizer.nextTokenAsLong();
-        long recv_packets = tokenizer.nextTokenAsLong();
-        long recv_errs = tokenizer.nextTokenAsLong();
-        long recv_drop = tokenizer.nextTokenAsLong();
-        tokenizer.skipTokens(4); // fifo, frame, compressed, multicast
-        long xmit_bytes = tokenizer.nextTokenAsLong();
-        long xmit_packets = tokenizer.nextTokenAsLong();
-        long xmit_errs = tokenizer.nextTokenAsLong();
-        long xmit_drop = tokenizer.nextTokenAsLong();
-        tokenizer.skipToken(); // fifo
-        long xmit_colls = tokenizer.nextTokenAsLong();
+        st.setString(line.substring(index + 1).trim());
+        long recv_bytes = st.nextTokenAsLong();
+        long recv_packets = st.nextTokenAsLong();
+        long recv_errs = st.nextTokenAsLong();
+        long recv_drop = st.nextTokenAsLong();
+        st.skipTokens(4); // fifo, frame, compressed, multicast
+        long xmit_bytes = st.nextTokenAsLong();
+        long xmit_packets = st.nextTokenAsLong();
+        long xmit_errs = st.nextTokenAsLong();
+        long xmit_drop = st.nextTokenAsLong();
+        st.skipToken(); // fifo
+        long xmit_colls = st.nextTokenAsLong();
 
         if (isloopback) {
           lo_recv_packets = recv_packets;
@@ -470,7 +471,7 @@ public class LinuxProcFsStatistics {
     } catch (NoSuchElementException nsee) {
     } catch (IOException ioe) {
     } finally {
-      tokenizer.releaseResources();
+      st.releaseResources();
       if (br != null)
         try {
           br.close();
@@ -530,22 +531,22 @@ public class LinuxProcFsStatistics {
         br.readLine(); // Discard header info
       }
       while ((line = br.readLine()) != null) {
-        tokenizer.setString(line);
+        st.setString(line);
         {
           // " 8 1 sdb" on 2.6
           // " 8 1 452145145 sdb" on 2.4
-          String tok = tokenizer.nextToken();
+          String tok = st.nextToken();
           if (tok.length() == 0 || Character.isWhitespace(tok.charAt(0))) {
             // skip over first token since it is whitespace
-            tok = tokenizer.nextToken();
+            tok = st.nextToken();
           }
           // skip first token it is some number
-          tok = tokenizer.nextToken();
+          tok = st.nextToken();
           // skip second token it is some number
-          tok = tokenizer.nextToken();
+          tok = st.nextToken();
           if (!hasDiskStats) {
             // skip third token it is some number
-            tok = tokenizer.nextToken();
+            tok = st.nextToken();
           }
           // Now tok should be the device name.
           if (Character.isDigit(tok.charAt(tok.length() - 1))) {
@@ -554,20 +555,20 @@ public class LinuxProcFsStatistics {
             continue;
           }
         }
-        long tmp_readsCompleted = tokenizer.nextTokenAsLong();
-        long tmp_readsMerged = tokenizer.nextTokenAsLong();
-        long tmp_sectorsRead = tokenizer.nextTokenAsLong();
-        long tmp_timeReading = tokenizer.nextTokenAsLong();
-        if (tokenizer.hasMoreTokens()) {
+        long tmp_readsCompleted = st.nextTokenAsLong();
+        long tmp_readsMerged = st.nextTokenAsLong();
+        long tmp_sectorsRead = st.nextTokenAsLong();
+        long tmp_timeReading = st.nextTokenAsLong();
+        if (st.hasMoreTokens()) {
           // If we are on 2.6 then we might only have 4 longs; if so ignore this line
           // Otherwise we should have 11 long tokens.
-          long tmp_writesCompleted = tokenizer.nextTokenAsLong();
-          long tmp_writesMerged = tokenizer.nextTokenAsLong();
-          long tmp_sectorsWritten = tokenizer.nextTokenAsLong();
-          long tmp_timeWriting = tokenizer.nextTokenAsLong();
-          long tmp_iosInProgress = tokenizer.nextTokenAsLong();
-          long tmp_timeIosInProgress = tokenizer.nextTokenAsLong();
-          long tmp_ioTime = tokenizer.nextTokenAsLong();
+          long tmp_writesCompleted = st.nextTokenAsLong();
+          long tmp_writesMerged = st.nextTokenAsLong();
+          long tmp_sectorsWritten = st.nextTokenAsLong();
+          long tmp_timeWriting = st.nextTokenAsLong();
+          long tmp_iosInProgress = st.nextTokenAsLong();
+          long tmp_timeIosInProgress = st.nextTokenAsLong();
+          long tmp_ioTime = st.nextTokenAsLong();
           readsCompleted += tmp_readsCompleted;
           readsMerged += tmp_readsMerged;
           sectorsRead += tmp_sectorsRead;
@@ -598,7 +599,7 @@ public class LinuxProcFsStatistics {
       // NoSuchElementException line=" + line, nsee);
     } catch (IOException ioe) {
     } finally {
-      tokenizer.releaseResources();
+      st.releaseResources();
       if (br != null)
         try {
           br.close();
@@ -707,8 +708,8 @@ public class LinuxProcFsStatistics {
     }
 
     public int[] calculateStats(String newStatLine) {
-      tokenizer.setString(newStatLine);
-      tokenizer.skipToken(); // cpu name
+      st.setString(newStatLine);
+      st.skipToken(); // cpu name
       final int MAX_CPU_STATS = CPU.values().length;
       /*
        * newer kernels now have 10 columns for cpu in /proc/stat. This number may increase even
@@ -721,8 +722,8 @@ public class LinuxProcFsStatistics {
       int actualCpuStats = 0;
       long unaccountedCpuUtilization = 0;
 
-      while (tokenizer.hasMoreTokens()) {
-        newStats.add(tokenizer.nextTokenAsLong());
+      while (st.hasMoreTokens()) {
+        newStats.add(st.nextTokenAsLong());
         actualCpuStats++;
       }
 
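calculateStats() above tokenizes the aggregate "cpu" line of /proc/stat and keeps however many jiffy columns the kernel exposes. A rough stand-alone sketch of that parsing, using String.split rather than the Geode tokenizer (the idle-percentage arithmetic is approximate and only for illustration):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class CpuStatSketch {
      public static void main(String[] args) throws IOException {
        try (BufferedReader br = new BufferedReader(new FileReader("/proc/stat"))) {
          String line = br.readLine(); // aggregate "cpu  user nice system idle iowait irq softirq ..."
          String[] tok = line.trim().split("\\s+");
          long total = 0;
          long idle = 0;
          for (int i = 1; i < tok.length; i++) { // tok[0] is the literal "cpu"
            long jiffies = Long.parseLong(tok[i]);
            total += jiffies;
            if (i == 4) { // fourth counter is idle time
              idle = jiffies;
            }
          }
          System.out.printf("cpu columns=%d, rough idle=%.1f%%%n",
              tok.length - 1, total == 0 ? 0.0 : 100.0 * idle / total);
        }
      }
    }
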
diff --git a/geode-core/src/main/java/org/apache/geode/internal/stats50/VMStats50.java b/geode-core/src/main/java/org/apache/geode/internal/stats50/VMStats50.java
index 0c420d6..cb5782c 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/stats50/VMStats50.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/stats50/VMStats50.java
@@ -232,7 +232,7 @@ public class VMStats50 implements VMStatsContract {
     sds.add(f.createLongCounter("unloadedClasses",
         "Total number of classes unloaded since vm started.", "classes", true));
     sds.add(f.createLongGauge("freeMemory",
-        "An approximation of the total amount of memory currently available for future allocated objects, measured in bytes.",
+        "An approximation fo the total amount of memory currently available for future allocated objects, measured in bytes.",
         "bytes", true));
     sds.add(f.createLongGauge("totalMemory",
         "The total amount of memory currently available for current and future objects, measured in bytes.",
diff --git a/geode-core/src/main/java/org/apache/geode/internal/net/Buffers.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/Buffers.java
similarity index 54%
rename from geode-core/src/main/java/org/apache/geode/internal/net/Buffers.java
rename to geode-core/src/main/java/org/apache/geode/internal/tcp/Buffers.java
index c77803d..b0f5612 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/net/Buffers.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/Buffers.java
@@ -12,11 +12,12 @@
  * or implied. See the License for the specific language governing permissions and limitations under
  * the License.
  */
-package org.apache.geode.internal.net;
+package org.apache.geode.internal.tcp;
 
 import java.lang.ref.SoftReference;
 import java.nio.ByteBuffer;
 import java.util.IdentityHashMap;
+import java.util.Iterator;
 import java.util.concurrent.ConcurrentLinkedQueue;
 
 import org.apache.geode.annotations.internal.MakeNotStatic;
@@ -25,46 +26,29 @@ import org.apache.geode.internal.Assert;
 
 public class Buffers {
   /**
-   * Buffers may be acquired from the Buffers pool
-   * or they may be allocated using Buffer.allocate(). This enum is used
-   * to note the different types. Tracked buffers come from the Buffers pool
-   * and need to be released when we're done using them.
-   */
-  public enum BufferType {
-    UNTRACKED, TRACKED_SENDER, TRACKED_RECEIVER
-  }
-
-  /**
    * A list of soft references to byte buffers.
    */
-  @MakeNotStatic
-  private static final ConcurrentLinkedQueue<BBSoftReference> bufferQueue =
-      new ConcurrentLinkedQueue<>();
-
-  /**
-   * use direct ByteBuffers instead of heap ByteBuffers for NIO operations
-   */
-  public static final boolean useDirectBuffers = !Boolean.getBoolean("p2p.nodirectBuffers");
+  private static final ConcurrentLinkedQueue bufferQueue = new ConcurrentLinkedQueue();
 
   /**
    * Should only be called by threads that have currently acquired send permission.
    *
    * @return a byte buffer to be used for sending on this connection.
    */
-  public static ByteBuffer acquireSenderBuffer(int size, DMStats stats) {
+  static ByteBuffer acquireSenderBuffer(int size, DMStats stats) {
     return acquireBuffer(size, stats, true);
   }
 
-  public static ByteBuffer acquireReceiveBuffer(int size, DMStats stats) {
+  static ByteBuffer acquireReceiveBuffer(int size, DMStats stats) {
     return acquireBuffer(size, stats, false);
   }
 
-  private static ByteBuffer acquireBuffer(int size, DMStats stats, boolean send) {
+  static ByteBuffer acquireBuffer(int size, DMStats stats, boolean send) {
     ByteBuffer result;
-    if (useDirectBuffers) {
+    if (TCPConduit.useDirectBuffers) {
       IdentityHashMap<BBSoftReference, BBSoftReference> alreadySeen = null; // keys are used like a
                                                                             // set
-      BBSoftReference ref = bufferQueue.poll();
+      BBSoftReference ref = (BBSoftReference) bufferQueue.poll();
       while (ref != null) {
         ByteBuffer bb = ref.getBB();
         if (bb == null) {
@@ -85,7 +69,7 @@ public class Buffers {
           // wasn't big enough so put it back in the queue
           Assert.assertTrue(bufferQueue.offer(ref));
           if (alreadySeen == null) {
-            alreadySeen = new IdentityHashMap<>();
+            alreadySeen = new IdentityHashMap<BBSoftReference, BBSoftReference>();
           }
           if (alreadySeen.put(ref, ref) != null) {
             // if it returns non-null then we have already seen this item
@@ -94,7 +78,7 @@ public class Buffers {
             break;
           }
         }
-        ref = bufferQueue.poll();
+        ref = (BBSoftReference) bufferQueue.poll();
       }
       result = ByteBuffer.allocateDirect(size);
     } else {
@@ -102,89 +86,26 @@ public class Buffers {
       result = ByteBuffer.allocate(size);
     }
     if (send) {
-      stats.incSenderBufferSize(size, useDirectBuffers);
+      stats.incSenderBufferSize(size, TCPConduit.useDirectBuffers);
     } else {
-      stats.incReceiverBufferSize(size, useDirectBuffers);
+      stats.incReceiverBufferSize(size, TCPConduit.useDirectBuffers);
     }
     return result;
   }
 
-  public static void releaseSenderBuffer(ByteBuffer bb, DMStats stats) {
+  static void releaseSenderBuffer(ByteBuffer bb, DMStats stats) {
     releaseBuffer(bb, stats, true);
   }
 
-  public static void releaseReceiveBuffer(ByteBuffer bb, DMStats stats) {
+  static void releaseReceiveBuffer(ByteBuffer bb, DMStats stats) {
     releaseBuffer(bb, stats, false);
   }
 
   /**
-   * expand a buffer that's currently being read from
-   */
-  static ByteBuffer expandReadBufferIfNeeded(BufferType type, ByteBuffer existing,
-      int desiredCapacity, DMStats stats) {
-    if (existing.capacity() >= desiredCapacity) {
-      if (existing.position() > 0) {
-        existing.compact();
-        existing.flip();
-      }
-      return existing;
-    }
-    ByteBuffer newBuffer = acquireBuffer(type, desiredCapacity, stats);
-    newBuffer.clear();
-    newBuffer.put(existing);
-    newBuffer.flip();
-    releaseBuffer(type, existing, stats);
-    return newBuffer;
-  }
-
-  /**
-   * expand a buffer that's currently being written to
-   */
-  static ByteBuffer expandWriteBufferIfNeeded(BufferType type, ByteBuffer existing,
-      int desiredCapacity, DMStats stats) {
-    if (existing.capacity() >= desiredCapacity) {
-      return existing;
-    }
-    ByteBuffer newBuffer = acquireBuffer(type, desiredCapacity, stats);
-    newBuffer.clear();
-    existing.flip();
-    newBuffer.put(existing);
-    releaseBuffer(type, existing, stats);
-    return newBuffer;
-  }
-
-  static ByteBuffer acquireBuffer(Buffers.BufferType type, int capacity, DMStats stats) {
-    switch (type) {
-      case UNTRACKED:
-        return ByteBuffer.allocate(capacity);
-      case TRACKED_SENDER:
-        return Buffers.acquireSenderBuffer(capacity, stats);
-      case TRACKED_RECEIVER:
-        return Buffers.acquireReceiveBuffer(capacity, stats);
-    }
-    throw new IllegalArgumentException("Unexpected buffer type " + type.toString());
-  }
-
-  static void releaseBuffer(Buffers.BufferType type, ByteBuffer buffer, DMStats stats) {
-    switch (type) {
-      case UNTRACKED:
-        return;
-      case TRACKED_SENDER:
-        Buffers.releaseSenderBuffer(buffer, stats);
-        return;
-      case TRACKED_RECEIVER:
-        Buffers.releaseReceiveBuffer(buffer, stats);
-        return;
-    }
-    throw new IllegalArgumentException("Unexpected buffer type " + type.toString());
-  }
-
-
-  /**
    * Releases a previously acquired buffer.
    */
-  private static void releaseBuffer(ByteBuffer bb, DMStats stats, boolean send) {
-    if (useDirectBuffers) {
+  static void releaseBuffer(ByteBuffer bb, DMStats stats, boolean send) {
+    if (TCPConduit.useDirectBuffers) {
       BBSoftReference bbRef = new BBSoftReference(bb, send);
       bufferQueue.offer(bbRef);
     } else {
@@ -197,8 +118,11 @@ public class Buffers {
   }
 
   public static void initBufferStats(DMStats stats) { // fixes 46773
-    if (useDirectBuffers) {
-      for (BBSoftReference ref : bufferQueue) {
+    if (TCPConduit.useDirectBuffers) {
+      @SuppressWarnings("unchecked")
+      Iterator<BBSoftReference> it = (Iterator<BBSoftReference>) bufferQueue.iterator();
+      while (it.hasNext()) {
+        BBSoftReference ref = it.next();
         if (ref.getBB() != null) {
           if (ref.getSend()) { // fix bug 46773
             stats.incSenderBufferSize(ref.getSize(), true);
@@ -219,7 +143,7 @@ public class Buffers {
     private int size;
     private final boolean send;
 
-    BBSoftReference(ByteBuffer bb, boolean send) {
+    public BBSoftReference(ByteBuffer bb, boolean send) {
       super(bb);
       this.size = bb.capacity();
       this.send = send;
@@ -229,7 +153,7 @@ public class Buffers {
       return this.size;
     }
 
-    synchronized int consumeSize() {
+    public synchronized int consumeSize() {
       int result = this.size;
       this.size = 0;
       return result;
@@ -240,7 +164,7 @@ public class Buffers {
     }
 
     public ByteBuffer getBB() {
-      return super.get();
+      return (ByteBuffer) super.get();
     }
   }
 
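Buffers above pools direct ByteBuffers behind SoftReferences on a ConcurrentLinkedQueue, so pooled buffers can be reclaimed under memory pressure yet reused when a large-enough one is available. A stripped-down sketch of that pattern (class and method names are illustrative, and the cycle detection the real class does with an IdentityHashMap is omitted):

    import java.lang.ref.SoftReference;
    import java.nio.ByteBuffer;
    import java.util.concurrent.ConcurrentLinkedQueue;

    public class DirectBufferPoolSketch {
      private final ConcurrentLinkedQueue<SoftReference<ByteBuffer>> pool =
          new ConcurrentLinkedQueue<>();

      public ByteBuffer acquire(int size) {
        SoftReference<ByteBuffer> ref;
        while ((ref = pool.poll()) != null) {
          ByteBuffer bb = ref.get();
          if (bb == null) {
            continue; // collected under memory pressure; drop the stale reference
          }
          if (bb.capacity() >= size) {
            bb.clear();
            bb.limit(size);
            return bb; // reuse a pooled buffer that is big enough
          }
          pool.offer(ref); // too small for this request; leave it for another caller
          break;           // stop rather than spin over undersized buffers
        }
        return ByteBuffer.allocateDirect(size);
      }

      public void release(ByteBuffer bb) {
        pool.offer(new SoftReference<>(bb));
      }
    }
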
diff --git a/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java
index 247819a..47e90a2 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java
@@ -16,11 +16,16 @@ package org.apache.geode.internal.tcp;
 
 import static org.apache.geode.distributed.ConfigurationProperties.SECURITY_PEER_AUTH_INIT;
 
+import java.io.BufferedInputStream;
+import java.io.ByteArrayOutputStream;
 import java.io.DataInputStream;
+import java.io.DataOutputStream;
 import java.io.IOException;
 import java.io.InputStream;
 import java.io.InterruptedIOException;
+import java.io.OutputStream;
 import java.net.ConnectException;
+import java.net.Inet6Address;
 import java.net.InetSocketAddress;
 import java.net.Socket;
 import java.net.SocketException;
@@ -40,9 +45,6 @@ import java.util.concurrent.Semaphore;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicLong;
 
-import javax.net.ssl.SSLEngine;
-import javax.net.ssl.SSLException;
-
 import org.apache.logging.log4j.Logger;
 
 import org.apache.geode.CancelException;
@@ -70,6 +72,7 @@ import org.apache.geode.distributed.internal.direct.DirectChannel;
 import org.apache.geode.distributed.internal.membership.InternalDistributedMember;
 import org.apache.geode.distributed.internal.membership.MembershipManager;
 import org.apache.geode.internal.Assert;
+import org.apache.geode.internal.ByteArrayDataInput;
 import org.apache.geode.internal.DSFIDFactory;
 import org.apache.geode.internal.InternalDataSerializer;
 import org.apache.geode.internal.SystemTimer;
@@ -78,9 +81,6 @@ import org.apache.geode.internal.Version;
 import org.apache.geode.internal.alerting.AlertingAction;
 import org.apache.geode.internal.logging.LogService;
 import org.apache.geode.internal.logging.LoggingThread;
-import org.apache.geode.internal.net.Buffers;
-import org.apache.geode.internal.net.NioFilter;
-import org.apache.geode.internal.net.NioPlainEngine;
 import org.apache.geode.internal.net.SocketCreator;
 import org.apache.geode.internal.tcp.MsgReader.Header;
 import org.apache.geode.internal.util.concurrent.ReentrantSemaphore;
@@ -96,26 +96,28 @@ public class Connection implements Runnable {
   public static final String THREAD_KIND_IDENTIFIER = "P2P message reader";
 
   @MakeNotStatic
+  private static final int INITIAL_CAPACITY =
+      Integer.getInteger("p2p.readerBufferSize", 32768).intValue();
   private static int P2P_CONNECT_TIMEOUT;
   @MakeNotStatic
   private static boolean IS_P2P_CONNECT_TIMEOUT_INITIALIZED = false;
 
-  static final int NORMAL_MSG_TYPE = 0x4c;
-  static final int CHUNKED_MSG_TYPE = 0x4d; // a chunk of one logical msg
-  static final int END_CHUNKED_MSG_TYPE = 0x4e; // last in a series of chunks
-  static final int DIRECT_ACK_BIT = 0x20;
+  public static final int NORMAL_MSG_TYPE = 0x4c;
+  public static final int CHUNKED_MSG_TYPE = 0x4d; // a chunk of one logical msg
+  public static final int END_CHUNKED_MSG_TYPE = 0x4e; // last in a series of chunks
+  public static final int DIRECT_ACK_BIT = 0x20;
 
-  static final int MSG_HEADER_SIZE_OFFSET = 0;
-  static final int MSG_HEADER_TYPE_OFFSET = 4;
-  static final int MSG_HEADER_ID_OFFSET = 5;
-  static final int MSG_HEADER_BYTES = 7;
+  public static final int MSG_HEADER_SIZE_OFFSET = 0;
+  public static final int MSG_HEADER_TYPE_OFFSET = 4;
+  public static final int MSG_HEADER_ID_OFFSET = 5;
+  public static final int MSG_HEADER_BYTES = 7;
 
   /**
    * Small buffer used for send socket buffer on receiver connections and receive buffer on sender
    * connections.
    */
   public static final int SMALL_BUFFER_SIZE =
-      Integer.getInteger(DistributionConfig.GEMFIRE_PREFIX + "SMALL_BUFFER_SIZE", 4096);
+      Integer.getInteger(DistributionConfig.GEMFIRE_PREFIX + "SMALL_BUFFER_SIZE", 4096).intValue();
 
   /** counter to give connections a unique id */
   @MakeNotStatic
@@ -129,7 +131,6 @@ public class Connection implements Runnable {
   private final ConnectionTable owner;
 
   private final TCPConduit conduit;
-  private NioFilter ioFilter;
 
   /**
    * Set to false once run() is terminating. Using this instead of Thread.isAlive as the reader
@@ -147,12 +148,7 @@ public class Connection implements Runnable {
   /** The idle timeout timer task for this connection */
   private SystemTimerTask idleTask;
 
-  private static final ThreadLocal<Boolean> isReaderThread = new ThreadLocal<Boolean>() {
-    @Override
-    public Boolean initialValue() {
-      return Boolean.FALSE;
-    }
-  };
+  private static final ThreadLocal isReaderThread = new ThreadLocal();
 
   public static void makeReaderThread() {
     // mark this thread as a reader thread
@@ -164,8 +160,13 @@ public class Connection implements Runnable {
   }
 
   // return true if this thread is a reader thread
-  private static boolean isReaderThread() {
-    return isReaderThread.get();
+  public static boolean isReaderThread() {
+    Object o = isReaderThread.get();
+    if (o == null) {
+      return false;
+    } else {
+      return ((Boolean) o).booleanValue();
+    }
   }
 
   private int getP2PConnectTimeout() {
@@ -188,15 +189,10 @@ public class Connection implements Runnable {
   private static final boolean DOMINO_THREAD_OWNED_SOCKETS =
       Boolean.getBoolean("p2p.ENABLE_DOMINO_THREAD_OWNED_SOCKETS");
 
-  private static final ThreadLocal<Boolean> isDominoThread = new ThreadLocal<Boolean>() {
-    @Override
-    public Boolean initialValue() {
-      return Boolean.FALSE;
-    }
-  };
+  private static final ThreadLocal isDominoThread = new ThreadLocal();
 
   // return true if this thread is a reader thread
-  private static boolean tipDomino() {
+  public static boolean tipDomino() {
     if (DOMINO_THREAD_OWNED_SOCKETS) {
       // mark this thread as one who wants to send ALL on TO sockets
       ConnectionTable.threadWantsOwnResources();
@@ -208,17 +204,25 @@ public class Connection implements Runnable {
   }
 
   public static boolean isDominoThread() {
-    return isDominoThread.get();
+    Object o = isDominoThread.get();
+    if (o == null) {
+      return false;
+    } else {
+      return ((Boolean) o).booleanValue();
+    }
   }
 
   /** the socket entrusted to this connection */
   private final Socket socket;
 
+  /** the non-NIO output stream */
+  OutputStream output;
+
   /** output stream/channel lock */
   private final Object outLock = new Object();
 
   /** the ID string of the conduit (for logging) */
-  private String conduitIdStr;
+  String conduitIdStr;
 
   /** Identifies the java group member on the other side of the connection. */
   InternalDistributedMember remoteAddr;
@@ -226,7 +230,7 @@ public class Connection implements Runnable {
   /**
    * Identifies the version of the member on the other side of the connection.
    */
-  private Version remoteVersion;
+  Version remoteVersion;
 
   /**
    * True if this connection was accepted by a listening socket. This makes it a receiver. False if
@@ -289,16 +293,16 @@ public class Connection implements Runnable {
   /**
    * Number of bytes in the outgoingQueue. Used to control capacity.
    */
-  private long queuedBytes;
+  private long queuedBytes = 0;
 
   /** used for async writes */
-  private Thread pusherThread;
+  Thread pusherThread;
 
   /**
    * The maximum number of concurrent senders sending a message to a single recipient.
    */
   private static final int MAX_SENDERS = Integer
-      .getInteger("p2p.maxConnectionSenders", DirectChannel.DEFAULT_CONCURRENCY_LEVEL);
+      .getInteger("p2p.maxConnectionSenders", DirectChannel.DEFAULT_CONCURRENCY_LEVEL).intValue();
   /**
    * This semaphore is used to throttle how many threads will try to do sends on this connection
    * concurrently. A thread must acquire this semaphore before it is allowed to start serializing
@@ -307,10 +311,10 @@ public class Connection implements Runnable {
   private final Semaphore senderSem = new ReentrantSemaphore(MAX_SENDERS);
 
   /** Set to true once the handshake has been read */
-  private volatile boolean handshakeRead;
-  private volatile boolean handshakeCancelled;
+  volatile boolean handshakeRead = false;
+  volatile boolean handshakeCancelled = false;
 
-  private volatile int replyCode;
+  private volatile int replyCode = 0;
 
   private static final byte REPLY_CODE_OK = (byte) 69;
   private static final byte REPLY_CODE_OK_WITH_ASYNC_INFO = (byte) 70;
@@ -326,7 +330,7 @@ public class Connection implements Runnable {
   /** set to true once a close begins */
   private final AtomicBoolean closing = new AtomicBoolean(false);
 
-  private volatile boolean readerShuttingDown = false;
+  volatile boolean readerShuttingDown = false;
 
   /** whether the socket is connected */
   volatile boolean connected = false;
@@ -334,10 +338,10 @@ public class Connection implements Runnable {
   /**
    * Set to true once a connection finishes its constructor
    */
-  private volatile boolean finishedConnecting = false;
+  volatile boolean finishedConnecting = false;
 
-  private volatile boolean accessed = true;
-  private volatile boolean socketInUse = false;
+  volatile boolean accessed = true;
+  volatile boolean socketInUse = false;
   volatile boolean timedOut = false;
 
   /**
@@ -349,7 +353,7 @@ public class Connection implements Runnable {
    * millisecond clock at the time message transmission started, if doing forced-disconnect
    * processing
    */
-  private long transmissionStartTime;
+  long transmissionStartTime;
 
   /** ack wait timeout - if socketInUse, use this to trigger SUSPECT processing */
   private long ackWaitTimeout;
@@ -361,42 +365,38 @@ public class Connection implements Runnable {
    * other connections participating in the current transmission. we notify them if ackSATimeout
    * expires to keep all members from generating alerts when only one is slow
    */
-  private List ackConnectionGroup;
+  List ackConnectionGroup;
 
   /** name of thread that we're currently performing an operation in (may be null) */
-  private String ackThreadName;
+  String ackThreadName;
 
-  /** the buffer used for message receipt */
-  private ByteBuffer inputBuffer;
+  /** the buffer used for NIO message receipt */
+  ByteBuffer nioInputBuffer;
 
   /** the length of the next message to be dispatched */
-  private int messageLength;
+  int nioMessageLength;
 
   /** the type of message being received */
-  private byte messageType;
-
-  /**
-   * when messages are chunked by a MsgStreamer we track the destreamers on
-   * the receiving side using a message identifier
-   */
-  private short messageId;
-
-  /** whether the length of the next message has been established */
-  private boolean lengthSet = false;
+  byte nioMessageType;
 
   /** used to lock access to destreamer data */
   private final Object destreamerLock = new Object();
 
   /** caches a msg destreamer that is currently not being used */
-  private MsgDestreamer idleMsgDestreamer;
+  MsgDestreamer idleMsgDestreamer;
 
   /**
-   * used to map a msgId to a MsgDestreamer which are used for destreaming chunked messages
+   * used to map a msgId to a MsgDestreamer which are used for destreaming chunked messages using
+   * nio
    */
-  private HashMap destreamerMap;
+  HashMap destreamerMap;
+
+  boolean directAck;
 
-  private boolean directAck;
+  short nioMsgId;
 
+  /** whether the length of the next message has been established */
+  boolean nioLengthSet = false;
 
   /** is this connection used for serial message delivery? */
   boolean preserveOrder = false;
@@ -429,7 +429,16 @@ public class Connection implements Runnable {
     setSocketBufferSize(sock, false, requestedSize);
   }
 
+  public int getReceiveBufferSize() {
+    return recvBufferSize;
+  }
+
   private void setSocketBufferSize(Socket sock, boolean send, int requestedSize) {
+    setSocketBufferSize(sock, send, requestedSize, false);
+  }
+
+  private void setSocketBufferSize(Socket sock, boolean send, int requestedSize,
+      boolean alreadySetInSocket) {
     if (requestedSize > 0) {
       try {
         int currentSize = send ? sock.getSendBufferSize() : sock.getReceiveBufferSize();
@@ -439,10 +448,13 @@ public class Connection implements Runnable {
           }
           return;
         }
-        if (send) {
-          sock.setSendBufferSize(requestedSize);
+        if (!alreadySetInSocket) {
+          if (send) {
+            sock.setSendBufferSize(requestedSize);
+          } else {
+            sock.setReceiveBufferSize(requestedSize);
+          }
         } else {
-          sock.setReceiveBufferSize(requestedSize);
         }
       } catch (SocketException ignore) {
       }
@@ -456,7 +468,7 @@ public class Connection implements Runnable {
         if (actualSize < requestedSize) {
           logger.info("Socket {} is {} instead of the requested {}.",
               (send ? "send buffer size" : "receive buffer size"),
-              actualSize, requestedSize);
+              Integer.valueOf(actualSize), Integer.valueOf(requestedSize));
         } else if (actualSize > requestedSize) {
           if (logger.isTraceEnabled()) {
             logger.trace("Socket {} buffer size is {} instead of the requested {}",
@@ -483,7 +495,7 @@ public class Connection implements Runnable {
   /**
    * Returns the size of the send buffer on this connection's socket.
    */
-  int getSendBufferSize() {
+  public int getSendBufferSize() {
     int result = this.sendBufferSize;
     if (result != -1) {
       return result;
@@ -499,12 +511,32 @@ public class Connection implements Runnable {
   }
 
   /**
-   * creates a "reader" connection that we accepted (it was initiated by an explicit connect being
-   * done on
+   * creates a connection that we accepted (it was initiated by an explicit connect being done on
+   * the other side). We will only receive data on this socket; never send.
+   */
+  protected static Connection createReceiver(ConnectionTable table, Socket socket)
+      throws IOException, ConnectionException {
+    Connection connection = new Connection(table, socket);
+    boolean readerStarted = false;
+    try {
+      connection.startReader(table);
+      readerStarted = true;
+    } finally {
+      if (!readerStarted) {
+        connection.closeForReconnect(
+            "could not start reader thread");
+      }
+    }
+    connection.waitForHandshake();
+    connection.finishedConnecting = true;
+    return connection;
+  }
+
+  /**
+   * creates a connection that we accepted (it was initiated by an explicit connect being done on
    * the other side).
    */
-  protected Connection(ConnectionTable t, Socket socket)
-      throws ConnectionException {
+  protected Connection(ConnectionTable t, Socket socket) throws IOException, ConnectionException {
     if (t == null) {
       throw new IllegalArgumentException(
           "Null ConnectionTable");
@@ -527,10 +559,22 @@ public class Connection implements Runnable {
       // unable to get the settings we want. Don't log an error because it will
       // likely happen a lot
     }
+    if (!useNIO()) {
+      try {
+        // this.output = new BufferedOutputStream(socket.getOutputStream(), SMALL_BUFFER_SIZE);
+        this.output = socket.getOutputStream();
+      } catch (IOException io) {
+        logger.fatal("Unable to get P2P connection streams", io);
+        t.getSocketCloser().asyncClose(socket, this.remoteAddr.toString(), null);
+        throw io;
+      }
+    }
   }
 
-  void initReceiver() {
+  protected void initReceiver() {
     this.startReader(owner);
+    this.waitForHandshake();
+    this.finishedConnecting = true;
   }
 
   void setIdleTimeoutTask(SystemTimerTask task) {
@@ -541,7 +585,7 @@ public class Connection implements Runnable {
   /**
    * Returns true if an idle connection was detected.
    */
-  boolean checkForIdleTimeout() {
+  public boolean checkForIdleTimeout() {
     if (isSocketClosed()) {
       return true;
     }
@@ -578,7 +622,9 @@ public class Connection implements Runnable {
   }
 
   @MakeNotStatic
-  private static final ByteBuffer okHandshakeBuf;
+  private static byte[] okHandshakeBytes;
+  @MakeNotStatic
+  private static ByteBuffer okHandshakeBuf;
   static {
     int msglen = 1; // one byte for reply code
     byte[] bytes = new byte[MSG_HEADER_BYTES + msglen];
@@ -593,13 +639,14 @@ public class Connection implements Runnable {
     bytes[MSG_HEADER_BYTES] = REPLY_CODE_OK;
     int allocSize = bytes.length;
     ByteBuffer bb;
-    if (Buffers.useDirectBuffers) {
+    if (TCPConduit.useDirectBuffers) {
       bb = ByteBuffer.allocateDirect(allocSize);
     } else {
       bb = ByteBuffer.allocate(allocSize);
     }
     bb.put(bytes);
     okHandshakeBuf = bb;
+    okHandshakeBytes = bytes;
   }
 
   /**
@@ -607,37 +654,38 @@ public class Connection implements Runnable {
    */
   public static final int MAX_MSG_SIZE = 0x00ffffff;
 
-  static int calcHdrSize(int byteSize) {
+  public static int calcHdrSize(int byteSize) {
     if (byteSize > MAX_MSG_SIZE) {
       throw new IllegalStateException(String.format("tcp message exceeded max size of %s",
-          MAX_MSG_SIZE));
+          Integer.valueOf(MAX_MSG_SIZE)));
     }
     int hdrSize = byteSize;
     hdrSize |= (HANDSHAKE_VERSION << 24);
     return hdrSize;
   }
 
-  static int calcMsgByteSize(int hdrSize) {
+  public static int calcMsgByteSize(int hdrSize) {
     return hdrSize & MAX_MSG_SIZE;
   }
 
-  static byte calcHdrVersion(int hdrSize) throws IOException {
+  public static byte calcHdrVersion(int hdrSize) throws IOException {
     byte ver = (byte) (hdrSize >> 24);
     if (ver != HANDSHAKE_VERSION) {
       throw new IOException(
           String.format(
               "Detected wrong version of GemFire product during handshake. Expected %s but found %s",
-              HANDSHAKE_VERSION, ver));
+              new Object[] {new Byte(HANDSHAKE_VERSION), new Byte(ver)}));
     }
     return ver;
   }
 
   private void sendOKHandshakeReply() throws IOException, ConnectionException {
-    ByteBuffer my_okHandshakeBuf;
+    byte[] my_okHandshakeBytes = null;
+    ByteBuffer my_okHandshakeBuf = null;
     if (this.isReceiver) {
       DistributionConfig cfg = owner.getConduit().config;
       ByteBuffer bb;
-      if (Buffers.useDirectBuffers) {
+      if (useNIO() && TCPConduit.useDirectBuffers) {
         bb = ByteBuffer.allocateDirect(128);
       } else {
         bb = ByteBuffer.allocate(128);
@@ -653,17 +701,35 @@ public class Connection implements Runnable {
       Version.writeOrdinal(bb, Version.CURRENT.ordinal(), true);
       // now set the msg length into position 0
       bb.putInt(0, calcHdrSize(bb.position() - MSG_HEADER_BYTES));
-      my_okHandshakeBuf = bb;
-      bb.flip();
+      if (useNIO()) {
+        my_okHandshakeBuf = bb;
+        bb.flip();
+      } else {
+        my_okHandshakeBytes = new byte[bb.position()];
+        bb.flip();
+        bb.get(my_okHandshakeBytes);
+      }
     } else {
       my_okHandshakeBuf = okHandshakeBuf;
+      my_okHandshakeBytes = okHandshakeBytes;
+    }
+    if (useNIO()) {
+      assert my_okHandshakeBuf != null;
+      synchronized (my_okHandshakeBuf) {
+        my_okHandshakeBuf.position(0);
+        nioWriteFully(getSocket().getChannel(), my_okHandshakeBuf, false, null);
+      }
+    } else {
+      synchronized (outLock) {
+        assert my_okHandshakeBytes != null;
+        this.output.write(my_okHandshakeBytes, 0, my_okHandshakeBytes.length);
+        this.output.flush();
+      }
     }
-    my_okHandshakeBuf.position(0);
-    writeFully(getSocket().getChannel(), my_okHandshakeBuf, false, null);
   }
 
   private static final int HANDSHAKE_TIMEOUT_MS =
-      Integer.getInteger("p2p.handshakeTimeoutMs", 59000);
+      Integer.getInteger("p2p.handshakeTimeoutMs", 59000).intValue();
   // private static final byte HANDSHAKE_VERSION = 1; // 501
   // public static final byte HANDSHAKE_VERSION = 2; // cbb5x_PerfScale
   // public static final byte HANDSHAKE_VERSION = 3; // durable_client
@@ -673,7 +739,7 @@ public class Connection implements Runnable {
   // NOTICE: handshake_version should not be changed anymore. Use the gemfire
   // version transmitted with the handshake bits and handle old handshakes
   // based on that
-  private static final byte HANDSHAKE_VERSION = 7; // product version exchange during handshake
+  public static final byte HANDSHAKE_VERSION = 7; // product version exchange during handshake
 
   /**
    * @throws ConnectionException if the conduit has stopped
@@ -709,7 +775,7 @@ public class Connection implements Runnable {
                     String.format(
                         "Connection handshake with %s timed out after waiting %s milliseconds.",
 
-                        peerName, HANDSHAKE_TIMEOUT_MS));
+                        peerName, Integer.valueOf(HANDSHAKE_TIMEOUT_MS)));
               } else {
                 peerName = "socket " + this.socket.getRemoteSocketAddress().toString() + ":"
                     + this.socket.getPort();
@@ -717,7 +783,7 @@ public class Connection implements Runnable {
               throw new ConnectionException(
                   String.format(
                       "Connection handshake with %s timed out after waiting %s milliseconds.",
-                      peerName, HANDSHAKE_TIMEOUT_MS));
+                      peerName, Integer.valueOf(HANDSHAKE_TIMEOUT_MS)));
             } else {
               success = this.handshakeRead;
             }
@@ -771,27 +837,24 @@ public class Connection implements Runnable {
   /**
    * asynchronously close this connection
    *
-   * @param beingSickForTests test hook to simulate sickness in communications & membership
+   * @param beingSick test hook to simulate sickness in communications & membership
    */
-  private void asyncClose(boolean beingSickForTests) {
+  private void asyncClose(boolean beingSick) {
     // note: remoteAddr may be null if this is a receiver that hasn't finished its handshake
 
     // we do the close in a background thread because the operation may hang if
     // there is a problem with the network. See bug #46659
 
-    releaseInputBuffer();
-
     // if simulating sickness, sockets must be closed in-line so that tests know
     // that the vm is sick when the beSick operation completes
-    if (beingSickForTests) {
+    if (beingSick) {
       prepareForAsyncClose();
     } else {
       if (this.asyncCloseCalled.compareAndSet(false, true)) {
         Socket s = this.socket;
         if (s != null && !s.isClosed()) {
           prepareForAsyncClose();
-          this.owner.getSocketCloser().asyncClose(s, String.valueOf(this.remoteAddr),
-              () -> ioFilter.close(s.getChannel()));
+          this.owner.getSocketCloser().asyncClose(s, String.valueOf(this.remoteAddr), null);
         }
       }
     }
@@ -827,7 +890,7 @@ public class Connection implements Runnable {
     }
   }
 
-  private void handshakeFromNewSender() throws IOException {
+  private void handshakeNio() throws IOException {
     waitForAddressCompletion();
 
     InternalDistributedMember myAddr = this.owner.getConduit().getMemberId();
@@ -860,7 +923,42 @@ public class Connection implements Runnable {
     // }
     connectHandshake.setMessageHeader(NORMAL_MSG_TYPE, ClusterDistributionManager.STANDARD_EXECUTOR,
         MsgIdGenerator.NO_MSG_ID);
-    writeFully(getSocket().getChannel(), connectHandshake.getContentBuffer(), false, null);
+    nioWriteFully(getSocket().getChannel(), connectHandshake.getContentBuffer(), false, null);
+  }
+
+  private void handshakeStream() throws IOException {
+    waitForAddressCompletion();
+
+    this.output = getSocket().getOutputStream();
+    ByteArrayOutputStream baos = new ByteArrayOutputStream(CONNECT_HANDSHAKE_SIZE);
+    DataOutputStream os = new DataOutputStream(baos);
+    InternalDistributedMember myAddr = owner.getConduit().getMemberId();
+    os.writeByte(0);
+    os.writeByte(HANDSHAKE_VERSION);
+    // NOTE: if you add or remove code in this section bump HANDSHAKE_VERSION
+    InternalDataSerializer.invokeToData(myAddr, os);
+    os.writeBoolean(this.sharedResource);
+    os.writeBoolean(this.preserveOrder);
+    os.writeLong(this.uniqueId);
+    Version.CURRENT.writeOrdinal(os, true);
+    os.writeInt(dominoCount.get() + 1);
+    os.flush();
+
+    byte[] msg = baos.toByteArray();
+    int len = calcHdrSize(msg.length);
+    byte[] lenbytes = new byte[MSG_HEADER_BYTES];
+    lenbytes[MSG_HEADER_SIZE_OFFSET] = (byte) ((len / 0x1000000) & 0xff);
+    lenbytes[MSG_HEADER_SIZE_OFFSET + 1] = (byte) ((len / 0x10000) & 0xff);
+    lenbytes[MSG_HEADER_SIZE_OFFSET + 2] = (byte) ((len / 0x100) & 0xff);
+    lenbytes[MSG_HEADER_SIZE_OFFSET + 3] = (byte) (len & 0xff);
+    lenbytes[MSG_HEADER_TYPE_OFFSET] = (byte) NORMAL_MSG_TYPE;
+    lenbytes[MSG_HEADER_ID_OFFSET] = (byte) ((MsgIdGenerator.NO_MSG_ID >> 8) & 0xff);
+    lenbytes[MSG_HEADER_ID_OFFSET + 1] = (byte) (MsgIdGenerator.NO_MSG_ID & 0xff);
+    synchronized (outLock) {
+      this.output.write(lenbytes, 0, lenbytes.length);
+      this.output.write(msg, 0, msg.length);
+      this.output.flush();
+    }
   }
 
   /**
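handshakeStream() above hand-assembles the 7-byte P2P message header: a big-endian 4-byte length word whose top byte carries HANDSHAKE_VERSION (see calcHdrSize), followed by a 1-byte message type and a 2-byte message id. The same layout expressed with a ByteBuffer, as a sketch (constants copied from the surrounding code, method name illustrative):

    import java.nio.ByteBuffer;

    public class HeaderSketch {
      static final int MSG_HEADER_BYTES = 7;
      static final byte HANDSHAKE_VERSION = 7;
      static final int MAX_MSG_SIZE = 0x00ffffff;

      // length word with the handshake version in its top byte, then type, then msg id
      static byte[] encodeHeader(int payloadLength, byte msgType, short msgId) {
        if (payloadLength > MAX_MSG_SIZE) {
          throw new IllegalStateException("tcp message exceeded max size of " + MAX_MSG_SIZE);
        }
        int lenWord = payloadLength | (HANDSHAKE_VERSION << 24);
        ByteBuffer bb = ByteBuffer.allocate(MSG_HEADER_BYTES); // big-endian by default
        bb.putInt(lenWord);
        bb.put(msgType);
        bb.putShort(msgId);
        return bb.array();
      }
    }
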
@@ -869,18 +967,20 @@ public class Connection implements Runnable {
    */
   private void attemptHandshake(ConnectionTable connTable) throws IOException {
     // send HANDSHAKE
-    // send this member's information. It's expected on the other side
-    if (logger.isDebugEnabled()) {
-      logger.debug("starting peer-to-peer handshake on socket {}", socket);
+    // send this server's port. It's expected on the other side
+    if (useNIO()) {
+      handshakeNio();
+    } else {
+      handshakeStream();
     }
-    handshakeFromNewSender();
+
     startReader(connTable); // this reader only reads the handshake and then exits
     waitForHandshake(); // waiting for reply
   }
 
   /** time between connection attempts */
   private static final int RECONNECT_WAIT_TIME = Integer
-      .getInteger(DistributionConfig.GEMFIRE_PREFIX + "RECONNECT_WAIT_TIME", 2000);
+      .getInteger(DistributionConfig.GEMFIRE_PREFIX + "RECONNECT_WAIT_TIME", 2000).intValue();
 
   /**
    * creates a new connection to a remote server. We are initiating this connection; the other side
@@ -988,8 +1088,7 @@ public class Connection implements Runnable {
             connectionErrorLogged = true; // otherwise change to use 100ms intervals causes a lot of
                                           // these
             logger.info("Connection: shared={} ordered={} failed to connect to peer {} because: {}",
-                sharedResource, preserveOrder, remoteAddr,
-                ioe.getCause() != null ? ioe.getCause() : ioe);
+                sharedResource, preserveOrder, remoteAddr, ioe);
           }
         } // IOException
         finally {
@@ -1016,7 +1115,10 @@ public class Connection implements Runnable {
             }
           } catch (ConnectionException e) {
             if (giveUpOnMember(mgr, remoteAddr)) {
-              throw new IOException("Handshake failed", e);
+              IOException ioe =
+                  new IOException("Handshake failed");
+              ioe.initCause(e);
+              throw ioe;
             }
             t.getConduit().getCancelCriterion().checkCancelInProgress(null);
             logger.info(
@@ -1115,69 +1217,80 @@ public class Connection implements Runnable {
 
     InetSocketAddress addr =
         new InetSocketAddress(remoteAddr.getInetAddress(), remoteAddr.getDirectChannelPort());
-    SocketChannel channel = SocketChannel.open();
-    this.owner.addConnectingSocket(channel.socket(), addr.getAddress());
-
-    try {
-      channel.socket().setTcpNoDelay(true);
-      channel.socket().setKeepAlive(SocketCreator.ENABLE_TCP_KEEP_ALIVE);
-
-      /*
-       * If conserve-sockets is false, the socket can be used for receiving responses, so set the
-       * receive buffer accordingly.
-       */
-      if (!sharedResource) {
-        setReceiveBufferSize(channel.socket(), this.owner.getConduit().tcpBufferSize);
-      } else {
-        setReceiveBufferSize(channel.socket(), SMALL_BUFFER_SIZE); // make small since only
-        // receive ack messages
-      }
-      setSendBufferSize(channel.socket());
-      channel.configureBlocking(true);
-
-      int connectTime = getP2PConnectTimeout();
-
+    if (useNIO()) {
+      SocketChannel channel = SocketChannel.open();
+      this.owner.addConnectingSocket(channel.socket(), addr.getAddress());
       try {
+        channel.socket().setTcpNoDelay(true);
 
-        channel.socket().connect(addr, connectTime);
+        channel.socket().setKeepAlive(SocketCreator.ENABLE_TCP_KEEP_ALIVE);
+
+        /*
+         * If conserve-sockets is false, the socket can be used for receiving responses, so set the
+         * receive buffer accordingly.
+         */
+        if (!sharedResource) {
+          setReceiveBufferSize(channel.socket(), this.owner.getConduit().tcpBufferSize);
+        } else {
+          setReceiveBufferSize(channel.socket(), SMALL_BUFFER_SIZE); // make small since only
+                                                                     // receive ack messages
+        }
+        setSendBufferSize(channel.socket());
+        channel.configureBlocking(true);
 
-        createIoFilter(channel, true);
+        int connectTime = getP2PConnectTimeout();
 
-      } catch (NullPointerException e) {
-        // bug #45044 - jdk 1.7 sometimes throws an NPE here
-        ConnectException c = new ConnectException("Encountered bug #45044 - retrying");
-        c.initCause(e);
-        // prevent a hot loop by sleeping a little bit
         try {
-          Thread.sleep(1000);
-        } catch (InterruptedException ie) {
-          Thread.currentThread().interrupt();
+          channel.socket().connect(addr, connectTime);
+        } catch (NullPointerException e) {
+          // bug #45044 - jdk 1.7 sometimes throws an NPE here
+          ConnectException c = new ConnectException("Encountered bug #45044 - retrying");
+          c.initCause(e);
+          // prevent a hot loop by sleeping a little bit
+          try {
+            Thread.sleep(1000);
+          } catch (InterruptedException ie) {
+            Thread.currentThread().interrupt();
+          }
+          throw c;
+        } catch (CancelledKeyException | ClosedSelectorException e) {
+          // bug #44469: for some reason NIO throws this runtime exception
+          // instead of an IOException on timeouts
+          ConnectException c = new ConnectException(
+              String.format("Attempt timed out after %s milliseconds",
+                  new Object[] {connectTime}));
+          c.initCause(e);
+          throw c;
         }
-        throw c;
-      } catch (SSLException e) {
-        ConnectException c = new ConnectException("Problem connecting to peer " + addr);
-        c.initCause(e);
-        throw c;
-      } catch (CancelledKeyException | ClosedSelectorException e) {
-        // bug #44469: for some reason NIO throws this runtime exception
-        // instead of an IOException on timeouts
-        ConnectException c = new ConnectException(
-            String.format("Attempt timed out after %s milliseconds",
-                connectTime));
-        c.initCause(e);
-        throw c;
+      } finally {
+        this.owner.removeConnectingSocket(channel.socket());
+      }
+      this.socket = channel.socket();
+    } else {
+      if (TCPConduit.useSSL) {
+        int socketBufferSize =
+            sharedResource ? SMALL_BUFFER_SIZE : this.owner.getConduit().tcpBufferSize;
+        this.socket = owner.getConduit().getSocketCreator().connectForServer(
+            remoteAddr.getInetAddress(), remoteAddr.getDirectChannelPort(), socketBufferSize);
+        // Set the receive buffer size local fields. It has already been set in the socket.
+        setSocketBufferSize(this.socket, false, socketBufferSize, true);
+        setSendBufferSize(this.socket);
+      } else {
+        Socket s = new Socket();
+        this.socket = s;
+        s.setTcpNoDelay(true);
+        s.setKeepAlive(SocketCreator.ENABLE_TCP_KEEP_ALIVE);
+        setReceiveBufferSize(s, SMALL_BUFFER_SIZE);
+        setSendBufferSize(s);
+        s.connect(addr, 0);
       }
-    } finally {
-      this.owner.removeConnectingSocket(channel.socket());
     }
-    this.socket = channel.socket();
-
     if (logger.isDebugEnabled()) {
       logger.debug("Connection: connected to {} with IP address {}", remoteAddr, addr);
     }
     try {
       getSocket().setTcpNoDelay(true);
-    } catch (SocketException ignored) {
+    } catch (SocketException e) {
     }
   }
 
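The NIO branch restored above opens a blocking SocketChannel, tunes the socket, and connects with a timeout through the channel's wrapped Socket (SocketChannel.connect itself takes no timeout). A condensed sketch under those assumptions, with host, port, timeout and buffer size as placeholders rather than Geode configuration:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.SocketChannel;

    public class P2pConnectSketch {
      public static SocketChannel connect(String host, int port, int connectTimeoutMs,
          int receiveBufferSize) throws IOException {
        SocketChannel channel = SocketChannel.open();
        channel.socket().setTcpNoDelay(true);
        channel.socket().setKeepAlive(true);
        channel.socket().setReceiveBufferSize(receiveBufferSize);
        channel.configureBlocking(true); // channels are blocking by default; made explicit here
        // connect through the wrapped Socket so a connect timeout can be applied,
        // as the code above does
        channel.socket().connect(new InetSocketAddress(host, port), connectTimeoutMs);
        return channel;
      }
    }
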
@@ -1189,15 +1302,20 @@ public class Connection implements Runnable {
    */
   private static final boolean BATCH_SENDS = Boolean.getBoolean("p2p.batchSends");
   private static final int BATCH_BUFFER_SIZE =
-      Integer.getInteger("p2p.batchBufferSize", 1024 * 1024);
-  private static final int BATCH_FLUSH_MS = Integer.getInteger("p2p.batchFlushTime", 50);
-  private final Object batchLock = new Object();
+      Integer.getInteger("p2p.batchBufferSize", 1024 * 1024).intValue();
+  private static final int BATCH_FLUSH_MS = Integer.getInteger("p2p.batchFlushTime", 50).intValue();
+  private Object batchLock;
   private ByteBuffer fillBatchBuffer;
   private ByteBuffer sendBatchBuffer;
   private BatchBufferFlusher batchFlusher;
 
   private void createBatchSendBuffer() {
-    if (Buffers.useDirectBuffers) {
+    // batch send buffer isn't needed if old-io is being used
+    if (!this.useNIO) {
+      return;
+    }
+    this.batchLock = new Object();
+    if (TCPConduit.useDirectBuffers) {
       this.fillBatchBuffer = ByteBuffer.allocateDirect(BATCH_BUFFER_SIZE);
       this.sendBatchBuffer = ByteBuffer.allocateDirect(BATCH_BUFFER_SIZE);
     } else {
@@ -1208,7 +1326,7 @@ public class Connection implements Runnable {
     this.batchFlusher.start();
   }
 
-  void cleanUpOnIdleTaskCancel() {
+  public void cleanUpOnIdleTaskCancel() {
     // Make sure receivers are removed from the connection table, this should always be a noop, but
     // is done here as a failsafe.
     if (isReceiver) {
@@ -1216,17 +1334,13 @@ public class Connection implements Runnable {
     }
   }
 
-  public void setInputBuffer(ByteBuffer buffer) {
-    this.inputBuffer = buffer;
-  }
-
   private class BatchBufferFlusher extends Thread {
     private volatile boolean flushNeeded = false;
     private volatile boolean timeToStop = false;
     private DMStats stats;
 
 
-    BatchBufferFlusher() {
+    public BatchBufferFlusher() {
       setDaemon(true);
       this.stats = owner.getConduit().getStats();
     }
@@ -1234,7 +1348,7 @@ public class Connection implements Runnable {
     /**
      * Called when a message writer needs the current fillBatchBuffer flushed
      */
-    void flushBuffer(ByteBuffer bb) {
+    public void flushBuffer(ByteBuffer bb) {
       final long start = DistributionStats.getStatTime();
       try {
         synchronized (this) {
@@ -1303,7 +1417,7 @@ public class Connection implements Runnable {
                 try {
                   sendBatchBuffer.flip();
                   SocketChannel channel = getSocket().getChannel();
-                  writeFully(channel, sendBatchBuffer, false, null);
+                  nioWriteFully(channel, sendBatchBuffer, false, null);
                   sendBatchBuffer.clear();
                 } catch (IOException | ConnectionException ex) {
                   logger.fatal("Exception flushing batch send buffer: %s", ex);
@@ -1443,12 +1557,15 @@ public class Connection implements Runnable {
               }
             }
           }
+          if (logger.isDebugEnabled()) {
+            logger.debug("Closing socket for {}", this);
+          }
         } else if (!forceRemoval) {
           removeEndpoint = false;
         }
         // make sure our socket is closed
         asyncClose(false);
-        lengthSet = false;
+        nioLengthSet = false;
       } // synchronized
 
       // moved the call to notifyHandshakeWaiter out of the above
@@ -1547,9 +1664,6 @@ public class Connection implements Runnable {
 
   /** starts a reader thread */
   private void startReader(ConnectionTable connTable) {
-    if (logger.isDebugEnabled()) {
-      logger.debug("Starting thread for " + p2pReaderName());
-    }
     Assert.assertTrue(!this.isRunning);
     stopped = false;
     this.isRunning = true;
@@ -1568,27 +1682,29 @@ public class Connection implements Runnable {
     ConnectionTable.threadWantsSharedResources();
     makeReaderThread(this.isReceiver);
     try {
-      readMessages();
+      if (useNIO()) {
+        runNioReader();
+      } else {
+        runOioReader();
+      }
     } finally {
       // bug36060: do the socket close within a finally block
       if (logger.isDebugEnabled()) {
         logger.debug("Stopping {} for {}", p2pReaderName(), remoteAddr);
       }
+      initiateSuspicionIfSharedUnordered();
       if (this.isReceiver) {
-        try {
-          initiateSuspicionIfSharedUnordered();
-        } catch (CancelException e) {
-          // shutting down
-        }
         if (!this.sharedResource) {
           this.conduit.getStats().incThreadOwnedReceivers(-1L, dominoCount.get());
         }
         asyncClose(false);
         this.owner.removeAndCloseThreadOwnedSockets();
-
-        if (this.isSharedResource()) {
-          releaseInputBuffer();
-        }
+      }
+      ByteBuffer tmp = this.nioInputBuffer;
+      if (tmp != null) {
+        this.nioInputBuffer = null;
+        final DMStats stats = this.owner.getConduit().getStats();
+        Buffers.releaseReceiveBuffer(tmp, stats);
       }
       // make sure that if the reader thread exits we notify a thread waiting
       // for the handshake.
@@ -1602,21 +1718,10 @@ public class Connection implements Runnable {
     } // finally
   }
 
-  private void releaseInputBuffer() {
-    ByteBuffer tmp = this.inputBuffer;
-    if (tmp != null) {
-      this.inputBuffer = null;
-      final DMStats stats = this.owner.getConduit().getStats();
-      Buffers.releaseReceiveBuffer(tmp, stats);
-    }
-  }
-
   private String p2pReaderName() {
     StringBuilder sb = new StringBuilder(64);
     if (this.isReceiver) {
-      sb.append(THREAD_KIND_IDENTIFIER + "@");
-    } else if (this.handshakeRead) {
-      sb.append("P2P message sender@");
+      sb.append("P2P message reader@");
     } else {
       sb.append("P2P handshake reader@");
     }
@@ -1627,23 +1732,18 @@ public class Connection implements Runnable {
     return sb.toString();
   }
 
-  private void readMessages() {
+  private void runNioReader() {
     // take a snapshot of uniqueId to detect reconnect attempts; see bug 37592
-    SocketChannel channel;
+    SocketChannel channel = null;
     try {
       channel = getSocket().getChannel();
-      socket.setSoTimeout(0);
-      socket.setTcpNoDelay(true);
-      if (ioFilter == null) {
-        createIoFilter(channel, false);
-      }
       channel.configureBlocking(true);
     } catch (ClosedChannelException e) {
       // bug 37693: the channel was asynchronously closed. Our work
       // is done.
       try {
         requestClose(
-            "readMessages caught closed channel");
+            "runNioReader caught closed channel");
       } catch (Exception ignore) {
       }
       return; // exit loop and thread
@@ -1651,16 +1751,15 @@ public class Connection implements Runnable {
       if (stopped || owner.getConduit().getCancelCriterion().isCancelInProgress()) {
         try {
           requestClose(
-              "readMessages caught shutdown");
+              "runNioReader caught shutdown");
         } catch (Exception ignore) {
         }
         return; // bug37520: exit loop (and thread)
       }
-      logger.info("Failed initializing socket for message {}: {}",
-          (this.isReceiver ? "receiver" : "sender"), ex.getMessage());
+      logger.fatal("Failed setting channel to blocking mode {}", ex);
       this.readerShuttingDown = true;
       try {
-        requestClose(String.format("Failed initializing socket %s",
+        requestClose(String.format("Failed setting channel to blocking mode %s",
             ex));
       } catch (Exception ignore) {
       }
@@ -1670,16 +1769,13 @@ public class Connection implements Runnable {
     if (!stopped) {
       // Assert.assertTrue(owner != null, "How did owner become null");
       if (logger.isDebugEnabled()) {
-        logger.debug("Starting {} on {}", p2pReaderName(), socket);
+        logger.debug("Starting {}", p2pReaderName());
       }
     }
     // we should not change the state of the connection if we are a handshake reader thread
     // as there is a race between this thread and the application thread doing direct ack
     // fix for #40869
     boolean isHandShakeReader = false;
-    // if we're using SSL/TLS the input buffer may already have data to process
-    boolean skipInitialRead = getInputBuffer().position() > 0;
-    boolean isInitialRead = true;
     try {
       for (;;) {
         if (stopped) {
@@ -1690,7 +1786,6 @@ public class Connection implements Runnable {
           Socket s = this.socket;
           if (s != null) {
             try {
-              ioFilter.close(s.getChannel());
               s.close();
             } catch (IOException e) {
               // don't care
@@ -1703,45 +1798,36 @@ public class Connection implements Runnable {
         }
 
         try {
-          ByteBuffer buff = getInputBuffer();
+          ByteBuffer buff = getNIOBuffer();
           synchronized (stateLock) {
             connectionState = STATE_READING;
           }
-          int amountRead;
-          if (!isInitialRead) {
-            amountRead = channel.read(buff);
-          } else {
-            isInitialRead = false;
-            if (!skipInitialRead) {
-              amountRead = channel.read(buff);
-            } else {
-              amountRead = buff.position();
-            }
-          }
+          int amt = channel.read(buff);
           synchronized (stateLock) {
             connectionState = STATE_IDLE;
           }
-          if (amountRead == 0) {
+          if (amt == 0) {
             continue;
           }
-          if (amountRead < 0) {
+          if (amt < 0) {
             this.readerShuttingDown = true;
             try {
               requestClose("SocketChannel.read returned EOF");
+              requestClose(
+                  "SocketChannel.read returned EOF");
             } catch (Exception e) {
               // ignore - shutting down
             }
             return;
           }
 
-          processInputBuffer();
-
+          processNIOBuffer();
           if (!this.isReceiver && (this.handshakeRead || this.handshakeCancelled)) {
             if (logger.isDebugEnabled()) {
               if (this.handshakeRead) {
-                logger.debug("handshake has been read {}", this);
+                logger.debug("{} handshake has been read {}", p2pReaderName(), this);
               } else {
-                logger.debug("handshake has been cancelled {}", this);
+                logger.debug("{} handshake has been cancelled {}", p2pReaderName(), this);
               }
             }
             isHandShakeReader = true;
@@ -1756,7 +1842,7 @@ public class Connection implements Runnable {
           try {
             requestClose(
                 String.format("CacheClosed in channel read: %s", e));
-          } catch (Exception ignored) {
+          } catch (Exception ex) {
           }
           return;
         } catch (ClosedChannelException e) {
@@ -1764,7 +1850,7 @@ public class Connection implements Runnable {
           try {
             requestClose(String.format("ClosedChannelException in channel read: %s",
                 e));
-          } catch (Exception ignored) {
+          } catch (Exception ex) {
           }
           return;
         } catch (IOException e) {
@@ -1787,7 +1873,7 @@ public class Connection implements Runnable {
           try {
             requestClose(
                 String.format("IOException in channel read: %s", e));
-          } catch (Exception ignored) {
+          } catch (Exception ex) {
           }
           return;
 
@@ -1800,7 +1886,7 @@ public class Connection implements Runnable {
           try {
             requestClose(
                 String.format("%s exception in channel read", e));
-          } catch (Exception ignored) {
+          } catch (Exception ex) {
           }
           return;
         }
@@ -1812,42 +1898,9 @@ public class Connection implements Runnable {
         }
       }
       if (logger.isDebugEnabled()) {
-        logger.debug("readMessages terminated id={} from {} isHandshakeReader={}", conduitIdStr,
-            remoteAddr, isHandShakeReader);
-      }
-    }
-  }
-
-  private void createIoFilter(SocketChannel channel, boolean clientSocket) throws IOException {
-    if (getConduit().useSSL() && channel != null) {
-      InetSocketAddress address = (InetSocketAddress) channel.getRemoteAddress();
-      SSLEngine engine =
-          getConduit().getSocketCreator().createSSLEngine(address.getHostName(), address.getPort());
-
-      if (!clientSocket) {
-        engine.setWantClientAuth(true);
-        engine.setNeedClientAuth(true);
+        logger.debug("{} runNioReader terminated id={} from {}", p2pReaderName(), conduitIdStr,
+            remoteAddr);
       }
-
-      int packetBufferSize = engine.getSession().getPacketBufferSize();
-      if (inputBuffer == null
-          || (inputBuffer.capacity() < packetBufferSize)) {
-        // TLS has a minimum input buffer size constraint
-        if (inputBuffer != null) {
-          Buffers.releaseReceiveBuffer(inputBuffer, getConduit().getStats());
-        }
-        inputBuffer = Buffers.acquireReceiveBuffer(packetBufferSize, getConduit().getStats());
-      }
-      if (channel.socket().getReceiveBufferSize() < packetBufferSize) {
-        channel.socket().setReceiveBufferSize(packetBufferSize);
-      }
-      if (channel.socket().getSendBufferSize() < packetBufferSize) {
-        channel.socket().setSendBufferSize(packetBufferSize);
-      }
-      ioFilter = getConduit().getSocketCreator().handshakeSSLSocketChannel(channel, engine,
-          getConduit().idleConnectionTimeout, clientSocket, inputBuffer, getConduit().getStats());
-    } else {
-      ioFilter = new NioPlainEngine();
     }
   }
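
[editor's aside, not part of the patch above] The restored runNioReader boils down to a classic blocking-channel read loop: configure the channel blocking, read into the receive buffer, retry on 0, treat a negative return as EOF, and hand any accumulated bytes to the buffer-processing step. A minimal sketch under those assumptions; ReadLoopSketch and handleData are hypothetical stand-ins for Connection's runNioReader and processNIOBuffer.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

final class ReadLoopSketch {
  // Reads forever on a blocking channel: 0 means "try again", a negative value means EOF.
  static void readLoop(SocketChannel channel, ByteBuffer buffer) throws IOException {
    channel.configureBlocking(true);
    for (;;) {
      int amt = channel.read(buffer); // blocks until at least one byte arrives
      if (amt == 0) {
        continue; // nothing read this time
      }
      if (amt < 0) {
        return; // peer closed the socket
      }
      handleData(buffer); // stand-in for processNIOBuffer()
    }
  }

  private static void handleData(ByteBuffer buffer) {
    buffer.flip();
    // decode any complete messages here ...
    buffer.compact(); // keep a partial message, if any, for the next read
  }
}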
 
@@ -1867,7 +1920,7 @@ public class Connection implements Runnable {
    * checks to see if an exception should not be logged: i.e., "forcibly closed", "reset by peer",
    * or "connection reset"
    */
-  private static boolean isIgnorableIOException(Exception e) {
+  public static boolean isIgnorableIOException(Exception e) {
     if (e instanceof ClosedChannelException) {
       return true;
     }
@@ -1951,9 +2004,465 @@ public class Connection implements Runnable {
     }
   }
 
+  private void runOioReader() {
+    InputStream input = null;
+    try {
+      if (logger.isDebugEnabled()) {
+        logger.debug("Socket is of type: {}", getSocket().getClass());
+      }
+      input = new BufferedInputStream(getSocket().getInputStream(), INITIAL_CAPACITY);
+    } catch (IOException io) {
+      if (stopped || owner.getConduit().getCancelCriterion().isCancelInProgress()) {
+        return; // bug 37520: exit run loop (and thread)
+      }
+      logger.fatal("Unable to get input stream", io);
+      stopped = true;
+    }
+
+    if (!stopped) {
+      Assert.assertTrue(owner != null,
+          "owner should not be null");
+      if (logger.isDebugEnabled()) {
+        logger.debug("Starting {}", p2pReaderName());
+      }
+    }
+
+    byte[] headerBytes = new byte[MSG_HEADER_BYTES];
+
+    final ByteArrayDataInput dis = new ByteArrayDataInput();
+    while (!stopped) {
+      try {
+        if (SystemFailure.getFailure() != null) {
+          // Allocate no objects here!
+          Socket s = this.socket;
+          if (s != null) {
+            try {
+              s.close();
+            } catch (IOException e) {
+              // don't care
+            }
+          }
+          SystemFailure.checkFailure(); // throws
+        }
+        if (this.owner.getConduit().getCancelCriterion().isCancelInProgress()) {
+          break;
+        }
+        int len = 0;
+        if (readFully(input, headerBytes, headerBytes.length) < 0) {
+          stopped = true;
+          continue;
+        }
+        // long recvNanos = DistributionStats.getStatTime();
+        len = ((headerBytes[MSG_HEADER_SIZE_OFFSET] & 0xff) * 0x1000000)
+            + ((headerBytes[MSG_HEADER_SIZE_OFFSET + 1] & 0xff) * 0x10000)
+            + ((headerBytes[MSG_HEADER_SIZE_OFFSET + 2] & 0xff) * 0x100)
+            + (headerBytes[MSG_HEADER_SIZE_OFFSET + 3] & 0xff);
+        /* byte msgHdrVersion = */ calcHdrVersion(len);
+        len = calcMsgByteSize(len);
+        int msgType = headerBytes[MSG_HEADER_TYPE_OFFSET];
+        short msgId = (short) (((headerBytes[MSG_HEADER_ID_OFFSET] & 0xff) << 8)
+            + (headerBytes[MSG_HEADER_ID_OFFSET + 1] & 0xff));
+        boolean myDirectAck = (msgType & DIRECT_ACK_BIT) != 0;
+        if (myDirectAck) {
+          msgType &= ~DIRECT_ACK_BIT; // clear the bit
+        }
+        // Following validation fixes bug 31145
+        if (!validMsgType(msgType)) {
+          logger.fatal("Unknown P2P message type: {}", Integer.valueOf(msgType));
+          this.readerShuttingDown = true;
+          requestClose(String.format("Unknown P2P message type: %s",
+              Integer.valueOf(msgType)));
+          break;
+        }
+        if (logger.isTraceEnabled())
+          logger.trace("{} reading {} bytes", conduitIdStr, len);
+        byte[] bytes = new byte[len];
+        if (readFully(input, bytes, len) < 0) {
+          stopped = true;
+          continue;
+        }
+        boolean interrupted = Thread.interrupted();
+        try {
+          if (this.handshakeRead) {
+            if (msgType == NORMAL_MSG_TYPE) {
+              // DMStats stats = this.owner.getConduit().stats;
+              // long start = DistributionStats.getStatTime();
+              this.owner.getConduit().getStats().incMessagesBeingReceived(true, len);
+              dis.initialize(bytes, this.remoteVersion);
+              DistributionMessage msg = null;
+              try {
+                ReplyProcessor21.initMessageRPId();
+                long startSer = this.owner.getConduit().getStats().startMsgDeserialization();
+                msg = (DistributionMessage) InternalDataSerializer.readDSFID(dis);
+                this.owner.getConduit().getStats().endMsgDeserialization(startSer);
+                if (dis.available() != 0) {
+                  logger.warn("Message deserialization of {} did not read {} bytes.",
+                      msg, Integer.valueOf(dis.available()));
+                }
+                // stats.incBatchCopyTime(start);
+                try {
+                  // start = DistributionStats.getStatTime();
+                  if (!dispatchMessage(msg, len, myDirectAck)) {
+                    continue;
+                  }
+                  // stats.incBatchSendTime(start);
+                } catch (MemberShunnedException e) {
+                  continue;
+                } catch (Exception de) {
+                  this.owner.getConduit().getCancelCriterion().checkCancelInProgress(de); // bug
+                                                                                          // 37101
+                  logger.fatal("Error dispatching message", de);
+                }
+              } catch (VirtualMachineError err) {
+                SystemFailure.initiateFailure(err);
+                // If this ever returns, rethrow the error. We're poisoned
+                // now, so don't let this thread continue.
+                throw err;
+              } catch (Throwable e) {
+                // Whenever you catch Error or Throwable, you must also
+                // catch VirtualMachineError (see above). However, there is
+                // _still_ a possibility that you are dealing with a cascading
+                // error condition, so you also need to check to see if the JVM
+                // is still usable:
+                SystemFailure.checkFailure();
+                // In particular I want OutOfMem to be caught here
+                if (!myDirectAck) {
+                  String reason =
+                      "Error deserializing message";
+                  sendFailureReply(ReplyProcessor21.getMessageRPId(), reason, e, myDirectAck);
+                }
+                if (e instanceof CancelException) {
+                  if (!(e instanceof CacheClosedException)) {
+                    // Just log a message if we had trouble deserializing due to
+                    // CacheClosedException; see bug 43543
+                    throw (CancelException) e;
+                  }
+                }
+                logger.fatal("Error deserializing message", e);
+                // requestClose();
+                // return;
+              } finally {
+                ReplyProcessor21.clearMessageRPId();
+              }
+            } else if (msgType == CHUNKED_MSG_TYPE) {
+              MsgDestreamer md = obtainMsgDestreamer(msgId, remoteVersion);
+              this.owner.getConduit().getStats().incMessagesBeingReceived(md.size() == 0, len);
+              try {
+                md.addChunk(bytes);
+              } catch (IOException ex) {
+                logger.fatal("Failed handling chunk message", ex);
+              }
+            } else /* (messageType == END_CHUNKED_MSG_TYPE) */ {
+              MsgDestreamer md = obtainMsgDestreamer(msgId, remoteVersion);
+              this.owner.getConduit().getStats().incMessagesBeingReceived(md.size() == 0, len);
+              try {
+                md.addChunk(bytes);
+              } catch (IOException ex) {
+                logger.fatal("Failed handling end chunk message", ex);
+              }
+              DistributionMessage msg = null;
+              int msgLength = 0;
+              String failureMsg = null;
+              Throwable failureEx = null;
+              int rpId = 0;
+              try {
+                msg = md.getMessage();
+              } catch (ClassNotFoundException ex) {
+                this.owner.getConduit().getStats().decMessagesBeingReceived(md.size());
+                failureEx = ex;
+                rpId = md.getRPid();
+                logger.warn("ClassNotFound deserializing message: {}", ex.toString());
+              } catch (IOException ex) {
+                this.owner.getConduit().getStats().decMessagesBeingReceived(md.size());
+                failureMsg = "IOException deserializing message";
+                failureEx = ex;
+                rpId = md.getRPid();
+                logger.fatal("IOException deserializing message", failureEx);
+              } catch (InterruptedException ex) {
+                Thread.currentThread().interrupt();
+                throw ex; // caught by outer try
+              } catch (VirtualMachineError err) {
+                SystemFailure.initiateFailure(err);
+                // If this ever returns, rethrow the error. We're poisoned
+                // now, so don't let this thread continue.
+                throw err;
+              } catch (Throwable ex) {
+                // Whenever you catch Error or Throwable, you must also
+                // catch VirtualMachineError (see above). However, there is
+                // _still_ a possibility that you are dealing with a cascading
+                // error condition, so you also need to check to see if the JVM
+                // is still usable:
+                SystemFailure.checkFailure();
+                this.owner.getConduit().getStats().decMessagesBeingReceived(md.size());
+                failureMsg = "Unexpected failure deserializing message";
+                failureEx = ex;
+                rpId = md.getRPid();
+                logger.fatal("Unexpected failure deserializing message",
+                    failureEx);
+              } finally {
+                msgLength = md.size();
+                releaseMsgDestreamer(msgId, md);
+              }
+              if (msg != null) {
+                try {
+                  if (!dispatchMessage(msg, msgLength, myDirectAck)) {
+                    continue;
+                  }
+                } catch (MemberShunnedException e) {
+                  continue;
+                } catch (Exception de) {
+                  this.owner.getConduit().getCancelCriterion().checkCancelInProgress(de);
+                  logger.fatal("Error dispatching message", de);
+                } catch (ThreadDeath td) {
+                  throw td;
+                } catch (VirtualMachineError err) {
+                  SystemFailure.initiateFailure(err);
+                  // If this ever returns, rethrow the error. We're poisoned
+                  // now, so don't let this thread continue.
+                  throw err;
+                } catch (Throwable t) {
+                  // Whenever you catch Error or Throwable, you must also
+                  // catch VirtualMachineError (see above). However, there is
+                  // _still_ a possibility that you are dealing with a cascading
+                  // error condition, so you also need to check to see if the JVM
+                  // is still usable:
+                  SystemFailure.checkFailure();
+                  logger.fatal("Throwable dispatching message", t);
+                }
+              } else if (failureEx != null) {
+                sendFailureReply(rpId, failureMsg, failureEx, myDirectAck);
+              }
+            }
+          } else {
+            dis.initialize(bytes, null);
+            if (!this.isReceiver) {
+              this.replyCode = dis.readUnsignedByte();
+              if (this.replyCode != REPLY_CODE_OK
+                  && this.replyCode != REPLY_CODE_OK_WITH_ASYNC_INFO) {
+                Integer replyCodeInteger = Integer.valueOf(this.replyCode);
+                String err = String.format("Unknown handshake reply code: %s",
+                    replyCodeInteger);
+
+                if (this.replyCode == 0) { // bug 37113
+                  if (logger.isDebugEnabled()) {
+                    logger.debug("{} (peer probably departed ungracefully)", err);
+                  }
+                } else {
+                  logger.fatal("Unknown handshake reply code: {}",
+                      replyCodeInteger);
+                }
+                this.readerShuttingDown = true;
+                requestClose(err);
+                break;
+              }
+              if (this.replyCode == REPLY_CODE_OK_WITH_ASYNC_INFO) {
+                this.asyncDistributionTimeout = dis.readInt();
+                this.asyncQueueTimeout = dis.readInt();
+                this.asyncMaxQueueSize = (long) dis.readInt() * (1024 * 1024);
+                if (this.asyncDistributionTimeout != 0) {
+                  logger.info("{} async configuration received {}.",
+                      p2pReaderName(),
+                      " asyncDistributionTimeout=" + this.asyncDistributionTimeout
+                          + " asyncQueueTimeout=" + this.asyncQueueTimeout
+                          + " asyncMaxQueueSize="
+                          + (this.asyncMaxQueueSize / (1024 * 1024)));
+                }
+                // read the product version ordinal for on-the-fly serialization
+                // transformations (for rolling upgrades)
+                this.remoteVersion = Version.readVersion(dis, true);
+              }
+              notifyHandshakeWaiter(true);
+            } else {
+              byte b = dis.readByte();
+              if (b != 0) {
+                throw new IllegalStateException(
+                    String.format(
+                        "Detected old version (pre 5.0.1) of GemFire or non-GemFire during handshake due to initial byte being %s",
+                        new Byte(b)));
+              }
+              byte handshakeByte = dis.readByte();
+              if (handshakeByte != HANDSHAKE_VERSION) {
+                throw new IllegalStateException(
+                    String.format(
+                        "Detected wrong version of GemFire product during handshake. Expected %s but found %s",
+
+                        new Object[] {new Byte(HANDSHAKE_VERSION), new Byte(handshakeByte)}));
+              }
+              InternalDistributedMember remote = DSFIDFactory.readInternalDistributedMember(dis);
+              setRemoteAddr(remote);
+              Thread.currentThread().setName(String.format("P2P message reader for %s on port %s",
+                  this.remoteAddr, this.socket.getPort()));
+              this.sharedResource = dis.readBoolean();
+              this.preserveOrder = dis.readBoolean();
+              this.uniqueId = dis.readLong();
+              // read the product version ordinal for on-the-fly serialization
+              // transformations (for rolling upgrades)
+              this.remoteVersion = Version.readVersion(dis, true);
+              int dominoNumber = 0;
+              if (this.remoteVersion == null
+                  || (this.remoteVersion.compareTo(Version.GFE_80) >= 0)) {
+                dominoNumber = dis.readInt();
+                if (this.sharedResource) {
+                  dominoNumber = 0;
+                }
+                dominoCount.set(dominoNumber);
+                // this.senderName = dis.readUTF();
+                setThreadName(dominoNumber);
+              }
+
+              if (!this.sharedResource) {
+                if (tipDomino()) {
+                  logger
+                      .info("thread owned receiver forcing itself to send on thread owned sockets");
+                  // bug #49565 - if domino count is >= 2 use shared resources.
+                  // Also see DistributedCacheOperation#supportsDirectAck
+                } else { // if (dominoNumber < 2){
+                  ConnectionTable.threadWantsOwnResources();
+                  if (logger.isDebugEnabled()) {
+                    logger.debug(
+                        "thread-owned receiver with domino count of {} will prefer sending on thread-owned sockets",
+                        dominoNumber);
+                  }
+                  // } else {
+                  // ConnectionTable.threadWantsSharedResources();
+                  // logger.fine("thread-owned receiver with domino count of " + dominoNumber + "
+                  // will prefer shared sockets");
+                }
+                this.conduit.getStats().incThreadOwnedReceivers(1L, dominoNumber);
+              }
+
+              if (logger.isDebugEnabled()) {
+                logger.debug("{} remoteAddr is {} {}", p2pReaderName(), this.remoteAddr,
+                    (this.remoteVersion != null ? " (" + this.remoteVersion + ')' : ""));
+              }
+
+              String authInit = System.getProperty(
+                  DistributionConfigImpl.SECURITY_SYSTEM_PREFIX + SECURITY_PEER_AUTH_INIT);
+              boolean isSecure = authInit != null && authInit.length() != 0;
+
+              if (isSecure) {
+                // ARB: wait till member authentication has been confirmed?
+                if (owner.getConduit().waitForMembershipCheck(this.remoteAddr)) {
+                  sendOKHandshakeReply(); // fix for bug 33224
+                  notifyHandshakeWaiter(true);
+                } else {
+                  // ARB: throw exception??
+                  notifyHandshakeWaiter(false);
+                  logger.warn("{} timed out during a membership check.",
+                      p2pReaderName());
+                }
+              } else {
+                sendOKHandshakeReply(); // fix for bug 33224
+                notifyHandshakeWaiter(true);
+              }
+            }
+            if (!this.isReceiver && (this.handshakeRead || this.handshakeCancelled)) {
+              if (logger.isDebugEnabled()) {
+                if (this.handshakeRead) {
+                  logger.debug("{} handshake has been read {}", p2pReaderName(), this);
+                } else {
+                  logger.debug("{} handshake has been cancelled {}", p2pReaderName(), this);
+                }
+              }
+              // Once we have read the handshake the reader can go away
+              break;
+            }
+            continue;
+          }
+        } catch (InterruptedException e) {
+          interrupted = true;
+          this.owner.getConduit().getCancelCriterion().checkCancelInProgress(e);
+          logger.fatal(String.format("%s Stray interrupt reading message", p2pReaderName()), e);
+          continue;
+        } catch (Exception ioe) {
+          this.owner.getConduit().getCancelCriterion().checkCancelInProgress(ioe); // bug 37101
+          if (!stopped) {
+            logger.fatal(String.format("%s Error reading message", p2pReaderName()), ioe);
+          }
+          continue;
+        } finally {
+          if (interrupted) {
+            Thread.currentThread().interrupt();
+          }
+        }
+      } catch (CancelException e) {
+        if (logger.isDebugEnabled()) {
+          String ccMsg = p2pReaderName() + " Cancelled: " + this;
+          if (e.getMessage() != null) {
+            ccMsg += ": " + e.getMessage();
+          }
+          logger.debug(ccMsg);
+        }
+        this.readerShuttingDown = true;
+        try {
+          requestClose(
+              String.format("CacheClosed in channel read: %s", e));
+        } catch (Exception ex) {
+        }
+        this.stopped = true;
+      } catch (IOException io) {
+        boolean closed = isSocketClosed() || "Socket closed".equalsIgnoreCase(io.getMessage()); // needed
+                                                                                                // for
+                                                                                                // Solaris
+                                                                                                // jdk
+                                                                                                // 1.4.2_08
+        if (!closed) {
+          if (logger.isDebugEnabled() && !isIgnorableIOException(io)) {
+            logger.debug("{} io exception for {}", p2pReaderName(), this, io);
+          }
+        }
+        this.readerShuttingDown = true;
+        try {
+          requestClose(String.format("IOException received: %s", io));
+        } catch (Exception ex) {
+        }
+
+        if (closed) {
+          stopped = true;
+        } else {
+          // sleep a bit to avoid a hot error loop
+          try {
+            Thread.sleep(1000);
+          } catch (InterruptedException ie) {
+            Thread.currentThread().interrupt();
+            if (this.owner.getConduit().getCancelCriterion().isCancelInProgress()) {
+              return;
+            }
+            break;
+          }
+        }
+      } // IOException
+      catch (Exception e) {
+        if (this.owner.getConduit().getCancelCriterion().isCancelInProgress()) {
+          return; // bug 37101
+        }
+        if (!stopped && !(e instanceof InterruptedException)) {
+          logger.fatal(String.format("%s exception received",
+              p2pReaderName()), e);
+        }
+        if (isSocketClosed()) {
+          stopped = true;
+        } else {
+          this.readerShuttingDown = true;
+          try {
+            requestClose(String.format("%s exception received", e));
+          } catch (Exception ex) {
+          }
+
+          // sleep a bit to avoid a hot error loop
+          try {
+            Thread.sleep(1000);
+          } catch (InterruptedException ie) {
+            Thread.currentThread().interrupt();
+            break;
+          }
+        }
+      }
+    }
+  }
 
   @edu.umd.cs.findbugs.annotations.SuppressWarnings(value = "DE_MIGHT_IGNORE")
-  void readFully(InputStream input, byte[] buffer, int len) throws IOException {
+  int readFully(InputStream input, byte[] buffer, int len) throws IOException {
     int bytesSoFar = 0;
     while (bytesSoFar < len) {
       this.owner.getConduit().getCancelCriterion().checkCancelInProgress(null);
@@ -1966,9 +2475,9 @@ public class Connection implements Runnable {
           this.readerShuttingDown = true;
           try {
             requestClose("Stream read returned non-positive length");
-          } catch (Exception ignored) {
+          } catch (Exception ex) {
           }
-          return;
+          return -1;
         }
         bytesSoFar += bytesThisTime;
       } catch (InterruptedIOException io) {
@@ -1976,7 +2485,7 @@ public class Connection implements Runnable {
         this.readerShuttingDown = true;
         try {
           requestClose("Current thread interrupted");
-        } catch (Exception ignored) {
+        } catch (Exception ex) {
         }
         Thread.currentThread().interrupt();
         this.owner.getConduit().getCancelCriterion().checkCancelInProgress(null);
@@ -1986,6 +2495,7 @@ public class Connection implements Runnable {
         }
       }
     } // while
+    return len;
   }
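
[editor's aside, not part of the patch above] With the change above, readFully reports EOF to its callers instead of returning silently: it yields len once the requested bytes have arrived and -1 when the stream ends early, which is what lets runOioReader stop cleanly on a closed socket. A stripped-down sketch of that contract, without the cancellation checks and requestClose handling the real method performs.

import java.io.IOException;
import java.io.InputStream;

final class ReadFullySketch {
  // Returns len once the buffer is filled, or -1 if the stream ends first.
  static int readFully(InputStream in, byte[] buf, int len) throws IOException {
    int soFar = 0;
    while (soFar < len) {
      int n = in.read(buf, soFar, len - soFar);
      if (n <= 0) {
        return -1; // non-positive read treated as end of stream, as above
      }
      soFar += n;
    }
    return len;
  }
}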
 
   /**
@@ -1994,7 +2504,7 @@ public class Connection implements Runnable {
    *
    * @throws ConnectionException if the conduit has stopped
    */
-  void sendPreserialized(ByteBuffer buffer, boolean cacheContentChanges,
+  public void sendPreserialized(ByteBuffer buffer, boolean cacheContentChanges,
       DistributionMessage msg) throws IOException, ConnectionException {
     if (!connected) {
       throw new ConnectionException(
@@ -2012,8 +2522,21 @@ public class Connection implements Runnable {
     }
     this.socketInUse = true;
     try {
-      SocketChannel channel = getSocket().getChannel();
-      writeFully(channel, buffer, false, msg);
+      if (useNIO()) {
+        SocketChannel channel = getSocket().getChannel();
+        nioWriteFully(channel, buffer, false, msg);
+      } else {
+        if (buffer.hasArray()) {
+          this.output.write(buffer.array(), buffer.arrayOffset(),
+              buffer.limit() - buffer.position());
+        } else {
+          byte[] bytesToWrite = getBytesToWrite(buffer);
+          synchronized (outLock) {
+            this.output.write(bytesToWrite);
+            this.output.flush();
+          }
+        }
+      }
       if (cacheContentChanges) {
         messagesSent++;
       }
@@ -2064,7 +2587,7 @@ public class Connection implements Runnable {
   /**
    * For testing we want to configure the connection without having to read a handshake
    */
-  void setSharedUnorderedForTest() {
+  protected void setSharedUnorderedForTest() {
     this.preserveOrder = false;
     this.sharedResource = true;
     this.handshakeRead = true;
@@ -2072,7 +2595,7 @@ public class Connection implements Runnable {
 
 
   /** ensure that a task is running to monitor transmission and reading of acks */
-  synchronized void scheduleAckTimeouts() {
+  public synchronized void scheduleAckTimeouts() {
     if (ackTimeoutTask == null) {
       final long msAW = this.owner.getDM().getConfig().getAckWaitThreshold() * 1000L;
       final long msSA = this.owner.getDM().getConfig().getAckSevereAlertThreshold() * 1000L;
@@ -2136,11 +2659,11 @@ public class Connection implements Runnable {
   }
 
   /** ack-wait-threshold and ack-severe-alert-threshold processing */
-  private boolean doSevereAlertProcessing() {
+  protected boolean doSevereAlertProcessing() {
     long now = System.currentTimeMillis();
     if (ackSATimeout > 0 && (transmissionStartTime + ackWaitTimeout + ackSATimeout) <= now) {
       logger.fatal("{} seconds have elapsed waiting for a response from {} for thread {}",
-          (ackWaitTimeout + ackSATimeout) / 1000L,
+          Long.valueOf((ackWaitTimeout + ackSATimeout) / 1000),
           getRemoteAddress(),
           ackThreadName);
       // turn off subsequent checks by setting the timeout to zero, then boot the member
@@ -2149,7 +2672,7 @@ public class Connection implements Runnable {
     } else if (!ackTimedOut && (0 < ackWaitTimeout)
         && (transmissionStartTime + ackWaitTimeout) <= now) {
       logger.warn("{} seconds have elapsed waiting for a response from {} for thread {}",
-          ackWaitTimeout / 1000L, getRemoteAddress(), ackThreadName);
+          Long.valueOf(ackWaitTimeout / 1000), getRemoteAddress(), ackThreadName);
       ackTimedOut = true;
 
       final String state = (connectionState == Connection.STATE_SENDING)
@@ -2163,6 +2686,12 @@ public class Connection implements Runnable {
     return false;
   }
 
+  private static byte[] getBytesToWrite(ByteBuffer buffer) {
+    byte[] bytesToWrite = new byte[buffer.limit()];
+    buffer.get(bytesToWrite);
+    return bytesToWrite;
+  }
+
   private boolean addToQueue(ByteBuffer buffer, DistributionMessage msg, boolean force)
       throws ConnectionException {
     final DMStats stats = this.owner.getConduit().getStats();
@@ -2287,20 +2816,20 @@ public class Connection implements Runnable {
     if (!addToQueue(buffer, msg, true)) {
       return false;
     } else {
-      startMessagePusher();
+      startNioPusher();
       return true;
     }
   }
 
-  private final Object pusherSync = new Object();
+  private final Object nioPusherSync = new Object();
 
-  private void startMessagePusher() {
-    synchronized (this.pusherSync) {
+  private void startNioPusher() {
+    synchronized (this.nioPusherSync) {
       while (this.pusherThread != null) {
         // wait for previous pusher thread to exit
         boolean interrupted = Thread.interrupted();
         try {
-          this.pusherSync.wait(); // spurious wakeup ok
+          this.nioPusherSync.wait(); // spurious wakeup ok
         } catch (InterruptedException ex) {
           interrupted = true;
           this.owner.getConduit().getCancelCriterion().checkCancelInProgress(ex);
@@ -2312,7 +2841,7 @@ public class Connection implements Runnable {
       }
       this.asyncQueuingInProgress = true;
       this.pusherThread =
-          new LoggingThread("P2P async pusher to " + this.remoteAddr, this::runMessagePusher);
+          new LoggingThread("P2P async pusher to " + this.remoteAddr, this::runNioPusher);
     } // synchronized
     this.pusherThread.start();
   }
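
[editor's aside, not part of the patch above] startNioPusher and runNioPusher cooperate so that at most one async pusher thread exists per connection: a starter waits on nioPusherSync until the previous pusher nulls pusherThread and calls notify() in its finally block (visible further down in this diff). A bare-bones sketch of that handoff with hypothetical names; the real code also handles interrupts and cancellation.

final class PusherHandoffSketch {
  private final Object lock = new Object();
  private Thread pusher; // at most one pusher thread at a time

  void startPusher(Runnable work) throws InterruptedException {
    Thread t;
    synchronized (lock) {
      while (pusher != null) {
        lock.wait(); // the exiting pusher clears the field and notifies
      }
      t = new Thread(() -> {
        try {
          work.run();
        } finally {
          synchronized (lock) {
            pusher = null;
            lock.notify(); // wake a waiting starter, if any
          }
        }
      });
      pusher = t;
    }
    t.start();
  }
}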
@@ -2418,7 +2947,7 @@ public class Connection implements Runnable {
   /**
    * have the pusher thread check for queue overflow and for idle time exceeded
    */
-  private void runMessagePusher() {
+  protected void runNioPusher() {
     try {
       final DMStats stats = this.owner.getConduit().getStats();
       final long threadStart = stats.startAsyncThread();
@@ -2434,8 +2963,6 @@ public class Connection implements Runnable {
               Socket s = this.socket;
               if (s != null) {
                 try {
-                  logger.debug("closing socket", new Exception("closing socket"));
-                  ioFilter.close(s.getChannel());
                   s.close();
                 } catch (IOException e) {
                   // don't care
@@ -2466,7 +2993,7 @@ public class Connection implements Runnable {
                 }
                 return;
               }
-              writeFully(channel, bb, true, null);
+              nioWriteFully(channel, bb, true, null);
               // We should not add messagesSent here according to Bruce.
               // The counts are increased elsewhere.
               // messagesSent++;
@@ -2500,7 +3027,7 @@ public class Connection implements Runnable {
         }
       } catch (CancelException ex) { // bug 37367
         final String err = String.format("P2P pusher %s caught CacheClosedException: %s",
-            this, ex);
+            new Object[] {this, ex});
         logger.debug(err);
         try {
           requestClose(err);
@@ -2523,14 +3050,14 @@ public class Connection implements Runnable {
         stats.incAsyncThreads(-1);
         stats.incAsyncQueues(-1);
         if (logger.isDebugEnabled()) {
-          logger.debug("runMessagePusher terminated id={} from {}/{}", conduitIdStr, remoteAddr,
+          logger.debug("runNioPusher terminated id={} from {}/{}", conduitIdStr, remoteAddr,
               remoteAddr);
         }
       }
     } finally {
-      synchronized (this.pusherSync) {
+      synchronized (this.nioPusherSync) {
         this.pusherThread = null;
-        this.pusherSync.notify();
+        this.nioPusherSync.notify();
       }
     }
   }
@@ -2605,7 +3132,6 @@ public class Connection implements Runnable {
         long queueTimeoutTarget = now + this.asyncQueueTimeout;
         channel.configureBlocking(false);
         try {
-          ByteBuffer wrappedBuffer = ioFilter.wrap(buffer);
           do {
             this.owner.getConduit().getCancelCriterion().checkCancelInProgress(null);
             retries++;
@@ -2613,7 +3139,7 @@ public class Connection implements Runnable {
             if (FORCE_ASYNC_QUEUE) {
               amtWritten = 0;
             } else {
-              amtWritten = channel.write(wrappedBuffer);
+              amtWritten = channel.write(buffer);
             }
             if (amtWritten == 0) {
               now = System.currentTimeMillis();
@@ -2640,7 +3166,7 @@ public class Connection implements Runnable {
                     // the partial msg a candidate for conflation.
                     msg = null;
                   }
-                  if (handleBlockedWrite(wrappedBuffer, msg)) {
+                  if (handleBlockedWrite(buffer, msg)) {
                     return;
                   }
                 }
@@ -2651,8 +3177,8 @@ public class Connection implements Runnable {
                 if (curQueuedBytes > this.asyncMaxQueueSize) {
                   logger.warn(
                       "Queued bytes {} exceeds max of {}, asking slow receiver {} to disconnect.",
-                      curQueuedBytes,
-                      this.asyncMaxQueueSize, this.remoteAddr);
+                      Long.valueOf(curQueuedBytes),
+                      Long.valueOf(this.asyncMaxQueueSize), this.remoteAddr);
                   stats.incAsyncQueueSizeExceeded(1);
                   disconnectNeeded = true;
                 }
@@ -2663,8 +3189,8 @@ public class Connection implements Runnable {
                   blockedMs += this.asyncQueueTimeout;
                   logger.warn(
                       "Blocked for {}ms which is longer than the max of {}ms, asking slow receiver {} to disconnect.",
-                      blockedMs,
-                      this.asyncQueueTimeout, this.remoteAddr);
+                      Long.valueOf(blockedMs),
+                      Integer.valueOf(this.asyncQueueTimeout), this.remoteAddr);
                   stats.incAsyncQueueTimeouts(1);
                   disconnectNeeded = true;
                 }
@@ -2714,7 +3240,7 @@ public class Connection implements Runnable {
               queueTimeoutTarget = System.currentTimeMillis() + this.asyncQueueTimeout;
               waitTime = 1;
             }
-          } while (wrappedBuffer.remaining() > 0);
+          } while (buffer.remaining() > 0);
         } finally {
           channel.configureBlocking(true);
         }
@@ -2730,12 +3256,12 @@ public class Connection implements Runnable {
   }
 
   /**
-   * writeFully implements a blocking write on a channel that is in non-blocking mode.
+   * nioWriteFully implements a blocking write on a channel that is in non-blocking mode.
    *
    * @param forceAsync true if we need to force a blocking async write.
    * @throws ConnectionException if the conduit has stopped
    */
-  void writeFully(SocketChannel channel, ByteBuffer buffer, boolean forceAsync,
+  protected void nioWriteFully(SocketChannel channel, ByteBuffer buffer, boolean forceAsync,
       DistributionMessage msg) throws IOException, ConnectionException {
     final DMStats stats = this.owner.getConduit().getStats();
     if (!this.sharedResource) {
@@ -2757,17 +3283,17 @@ public class Connection implements Runnable {
           }
           // fall through
         }
-        ByteBuffer wrappedBuffer = ioFilter.wrap(buffer);
-        while (wrappedBuffer.remaining() > 0) {
+        do {
           int amtWritten = 0;
           long start = stats.startSocketWrite(true);
           try {
-            amtWritten = channel.write(wrappedBuffer);
+            // this.writerThread = Thread.currentThread();
+            amtWritten = channel.write(buffer);
           } finally {
             stats.endSocketWrite(true, start, amtWritten, 0);
+            // this.writerThread = null;
           }
-        }
-
+        } while (buffer.remaining() > 0);
       } // synchronized
     } else {
       writeAsync(channel, buffer, forceAsync, msg, stats);
@@ -2775,15 +3301,16 @@ public class Connection implements Runnable {
   }
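
[editor's aside, not part of the patch above] The synchronized branch of nioWriteFully is the standard "write until drained" loop: SocketChannel.write may accept only part of the buffer, so the call is retried while remaining() is greater than zero. A minimal sketch of just that loop; the real method additionally takes the output lock, records socket-write times in DMStats, and routes shared connections through writeAsync.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

final class WriteFullySketch {
  // Keeps writing until every byte in the buffer has been handed to the channel.
  static void writeFully(SocketChannel channel, ByteBuffer buffer) throws IOException {
    do {
      channel.write(buffer); // may write fewer bytes than remaining()
    } while (buffer.remaining() > 0);
  }
}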
 
   /** gets the buffer for receiving message length bytes */
-  private ByteBuffer getInputBuffer() {
-    if (inputBuffer == null) {
+  protected ByteBuffer getNIOBuffer() {
+    final DMStats stats = this.owner.getConduit().getStats();
+    if (nioInputBuffer == null) {
       int allocSize = this.recvBufferSize;
       if (allocSize == -1) {
         allocSize = this.owner.getConduit().tcpBufferSize;
       }
-      inputBuffer = Buffers.acquireReceiveBuffer(allocSize, this.owner.getConduit().getStats());
+      nioInputBuffer = Buffers.acquireReceiveBuffer(allocSize, stats);
     }
-    return inputBuffer;
+    return nioInputBuffer;
   }
 
   /**
@@ -2796,28 +3323,30 @@ public class Connection implements Runnable {
 
   /* ~~~~~~~~~~~~~ connection states ~~~~~~~~~~~~~~~ */
   /** the connection is idle, but may be in use */
-  private static final byte STATE_IDLE = 0;
+  protected static final byte STATE_IDLE = 0;
   /** the connection is in use and is transmitting data */
-  private static final byte STATE_SENDING = 1;
+  protected static final byte STATE_SENDING = 1;
   /** the connection is in use and is done transmitting */
-  private static final byte STATE_POST_SENDING = 2;
+  protected static final byte STATE_POST_SENDING = 2;
   /** the connection is in use and is reading a direct-ack */
-  private static final byte STATE_READING_ACK = 3;
+  protected static final byte STATE_READING_ACK = 3;
   /** the connection is in use and has finished reading a direct-ack */
-  private static final byte STATE_RECEIVED_ACK = 4;
+  protected static final byte STATE_RECEIVED_ACK = 4;
   /** the connection is in use and is reading a message */
-  private static final byte STATE_READING = 5;
+  protected static final byte STATE_READING = 5;
   /* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */
 
   /** set to true if we exceeded the ack-wait-threshold waiting for a response */
-  private volatile boolean ackTimedOut;
+  protected volatile boolean ackTimedOut;
 
   /**
-   * @throws SocketTimeoutException if wait expires.
-   * @throws ConnectionException if ack is not received
+   * @param msToWait number of milliseconds to wait for an ack. If 0 then wait forever.
+   * @param msInterval interval between checks
+   * @throws SocketTimeoutException if msToWait expires.
+   * @throws ConnectionException if ack is not received (fixes bug 34312)
    */
-  public void readAck(final DirectReplyProcessor processor)
-      throws SocketTimeoutException, ConnectionException {
+  public void readAck(final int msToWait, final long msInterval,
+      final DirectReplyProcessor processor) throws SocketTimeoutException, ConnectionException {
     if (isSocketClosed()) {
       throw new ConnectionException(
           "connection is closed");
@@ -2832,24 +3361,28 @@ public class Connection implements Runnable {
     DMStats stats = owner.getConduit().getStats();
     final Version version = getRemoteVersion();
     try {
-      msgReader = new MsgReader(this, ioFilter, getInputBuffer(), version);
+      if (useNIO()) {
+        msgReader = new NIOMsgReader(this, version);
+      } else {
+        msgReader = new OioMsgReader(this, version);
+      }
 
       Header header = msgReader.readHeader();
 
       ReplyMessage msg = null;
       int len;
-      if (header.getMessageType() == NORMAL_MSG_TYPE) {
+      if (header.getNioMessageType() == NORMAL_MSG_TYPE) {
         msg = (ReplyMessage) msgReader.readMessage(header);
-        len = header.getMessageLength();
+        len = header.getNioMessageLength();
       } else {
-        MsgDestreamer destreamer = obtainMsgDestreamer(header.getMessageId(), version);
-        while (header.getMessageType() == CHUNKED_MSG_TYPE) {
+        MsgDestreamer destreamer = obtainMsgDestreamer(header.getNioMessageId(), version);
+        while (header.getNioMessageType() == CHUNKED_MSG_TYPE) {
           msgReader.readChunk(header, destreamer);
           header = msgReader.readHeader();
         }
         msgReader.readChunk(header, destreamer);
         msg = (ReplyMessage) destreamer.getMessage();
-        releaseMsgDestreamer(header.getMessageId(), destreamer);
+        releaseMsgDestreamer(header.getNioMessageId(), destreamer);
         len = destreamer.size();
       }
       // I'd really just like to call dispatchMessage here. However,
@@ -2907,6 +3440,9 @@ public class Connection implements Runnable {
             getRemoteAddress());
         this.ackTimedOut = false;
       }
+      if (msgReader != null) {
+        msgReader.close();
+      }
     }
     synchronized (stateLock) {
       this.connectionState = STATE_RECEIVED_ACK;
@@ -2917,52 +3453,391 @@ public class Connection implements Runnable {
    * processes the current NIO buffer. If there are complete messages in the buffer, they are
    * deserialized and passed to TCPConduit for further processing
    */
-  private void processInputBuffer() throws ConnectionException, IOException {
-
-    inputBuffer.flip();
-
-    ByteBuffer peerDataBuffer = ioFilter.unwrap(inputBuffer);
-    peerDataBuffer.flip();
-
+  private void processNIOBuffer() throws ConnectionException, IOException {
+    if (nioInputBuffer != null) {
+      nioInputBuffer.flip();
+    }
     boolean done = false;
 
     while (!done && connected) {
       this.owner.getConduit().getCancelCriterion().checkCancelInProgress(null);
-      int remaining = peerDataBuffer.remaining();
-      if (lengthSet || remaining >= MSG_HEADER_BYTES) {
-        if (!lengthSet) {
-          if (readMessageHeader(peerDataBuffer)) {
+      // long startTime = DistributionStats.getStatTime();
+      int remaining = nioInputBuffer.remaining();
+      if (nioLengthSet || remaining >= MSG_HEADER_BYTES) {
+        if (!nioLengthSet) {
+          int headerStartPos = nioInputBuffer.position();
+          nioMessageLength = nioInputBuffer.getInt();
+          /* nioMessageVersion = */ calcHdrVersion(nioMessageLength);
+          nioMessageLength = calcMsgByteSize(nioMessageLength);
+          nioMessageType = nioInputBuffer.get();
+          nioMsgId = nioInputBuffer.getShort();
+          directAck = (nioMessageType & DIRECT_ACK_BIT) != 0;
+          if (directAck) {
+            nioMessageType &= ~DIRECT_ACK_BIT; // clear the ack bit
+          }
+          // Following validation fixes bug 31145
+          if (!validMsgType(nioMessageType)) {
+            Integer nioMessageTypeInteger = Integer.valueOf(nioMessageType);
+            logger.fatal("Unknown P2P message type: {}", nioMessageTypeInteger);
+            this.readerShuttingDown = true;
+            requestClose(String.format("Unknown P2P message type: %s",
+                nioMessageTypeInteger));
             break;
           }
+          nioLengthSet = true;
+          // keep the header "in" the buffer until we have read the entire msg.
+          // Trust me: this will reduce copying on large messages.
+          nioInputBuffer.position(headerStartPos);
         }
-        if (remaining >= messageLength + MSG_HEADER_BYTES) {
-          lengthSet = false;
-          peerDataBuffer.position(peerDataBuffer.position() + MSG_HEADER_BYTES);
+        if (remaining >= nioMessageLength + MSG_HEADER_BYTES) {
+          nioLengthSet = false;
+          nioInputBuffer.position(nioInputBuffer.position() + MSG_HEADER_BYTES);
           // don't trust the message deserialization to leave the position in
           // the correct spot. Some of the serialization uses buffered
           // streams that can leave the position at the wrong spot
-          int startPos = peerDataBuffer.position();
-          int oldLimit = peerDataBuffer.limit();
-          peerDataBuffer.limit(startPos + messageLength);
-
+          int startPos = nioInputBuffer.position();
+          int oldLimit = nioInputBuffer.limit();
+          nioInputBuffer.limit(startPos + nioMessageLength);
           if (this.handshakeRead) {
-            try {
-              readMessage(peerDataBuffer);
-            } catch (SerializationException e) {
-              logger.info("input buffer startPos {} oldLimit {}", startPos, oldLimit);
-              throw e;
+            if (nioMessageType == NORMAL_MSG_TYPE) {
+              this.owner.getConduit().getStats().incMessagesBeingReceived(true, nioMessageLength);
+              ByteBufferInputStream bbis =
+                  remoteVersion == null ? new ByteBufferInputStream(nioInputBuffer)
+                      : new VersionedByteBufferInputStream(nioInputBuffer, remoteVersion);
+              DistributionMessage msg = null;
+              try {
+                ReplyProcessor21.initMessageRPId();
+                // add serialization stats
+                long startSer = this.owner.getConduit().getStats().startMsgDeserialization();
+                msg = (DistributionMessage) InternalDataSerializer.readDSFID(bbis);
+                this.owner.getConduit().getStats().endMsgDeserialization(startSer);
+                if (bbis.available() != 0) {
+                  logger.warn("Message deserialization of {} did not read {} bytes.",
+                      msg, Integer.valueOf(bbis.available()));
+                }
+                try {
+                  if (!dispatchMessage(msg, nioMessageLength, directAck)) {
+                    directAck = false;
+                  }
+                } catch (MemberShunnedException e) {
+                  directAck = false; // don't respond (bug39117)
+                } catch (Exception de) {
+                  this.owner.getConduit().getCancelCriterion().checkCancelInProgress(de);
+                  logger.fatal("Error dispatching message", de);
+                } catch (ThreadDeath td) {
+                  throw td;
+                } catch (VirtualMachineError err) {
+                  SystemFailure.initiateFailure(err);
+                  // If this ever returns, rethrow the error. We're poisoned
+                  // now, so don't let this thread continue.
+                  throw err;
+                } catch (Throwable t) {
+                  // Whenever you catch Error or Throwable, you must also
+                  // catch VirtualMachineError (see above). However, there is
+                  // _still_ a possibility that you are dealing with a cascading
+                  // error condition, so you also need to check to see if the JVM
+                  // is still usable:
+                  SystemFailure.checkFailure();
+                  logger.fatal("Throwable dispatching message", t);
+                }
+              } catch (VirtualMachineError err) {
+                SystemFailure.initiateFailure(err);
+                // If this ever returns, rethrow the error. We're poisoned
+                // now, so don't let this thread continue.
+                throw err;
+              } catch (Throwable t) {
+                // Whenever you catch Error or Throwable, you must also
+                // catch VirtualMachineError (see above). However, there is
+                // _still_ a possibility that you are dealing with a cascading
+                // error condition, so you also need to check to see if the JVM
+                // is still usable:
+                SystemFailure.checkFailure();
+                sendFailureReply(ReplyProcessor21.getMessageRPId(),
+                    "Error deserializing message", t,
+                    directAck);
+                if (t instanceof ThreadDeath) {
+                  throw (ThreadDeath) t;
+                }
+                if (t instanceof CancelException) {
+                  if (!(t instanceof CacheClosedException)) {
+                    // Just log a message if we had trouble deserializing due to
+                    // CacheClosedException; see bug 43543
+                    throw (CancelException) t;
+                  }
+                }
+                logger.fatal("Error deserializing message", t);
+              } finally {
+                ReplyProcessor21.clearMessageRPId();
+              }
+            } else if (nioMessageType == CHUNKED_MSG_TYPE) {
+              MsgDestreamer md = obtainMsgDestreamer(nioMsgId, remoteVersion);
+              this.owner.getConduit().getStats().incMessagesBeingReceived(md.size() == 0,
+                  nioMessageLength);
+              try {
+                md.addChunk(nioInputBuffer, nioMessageLength);
+              } catch (IOException ex) {
+                logger.fatal("Failed handling chunk message", ex);
+              }
+            } else /* (nioMessageType == END_CHUNKED_MSG_TYPE) */ {
+              // logger.info("END_CHUNK msgId="+nioMsgId);
+              MsgDestreamer md = obtainMsgDestreamer(nioMsgId, remoteVersion);
+              this.owner.getConduit().getStats().incMessagesBeingReceived(md.size() == 0,
+                  nioMessageLength);
+              try {
+                md.addChunk(nioInputBuffer, nioMessageLength);
+              } catch (IOException ex) {
+                logger.fatal("Failed handling end chunk message", ex);
+              }
+              DistributionMessage msg = null;
+              int msgLength = 0;
+              String failureMsg = null;
+              Throwable failureEx = null;
+              int rpId = 0;
+              boolean interrupted = false;
+              try {
+                msg = md.getMessage();
+              } catch (ClassNotFoundException ex) {
+                this.owner.getConduit().getStats().decMessagesBeingReceived(md.size());
+                failureMsg = "ClassNotFound deserializing message";
+                failureEx = ex;
+                rpId = md.getRPid();
+                logger.fatal("ClassNotFound deserializing message: {}", ex.toString());
+              } catch (IOException ex) {
+                this.owner.getConduit().getStats().decMessagesBeingReceived(md.size());
+                failureMsg = "IOException deserializing message";
+                failureEx = ex;
+                rpId = md.getRPid();
+                logger.fatal("IOException deserializing message", failureEx);
+              } catch (InterruptedException ex) {
+                interrupted = true;
+                this.owner.getConduit().getCancelCriterion().checkCancelInProgress(ex);
+              } catch (VirtualMachineError err) {
+                SystemFailure.initiateFailure(err);
+                // If this ever returns, rethrow the error. We're poisoned
+                // now, so don't let this thread continue.
+                throw err;
+              } catch (Throwable ex) {
+                // Whenever you catch Error or Throwable, you must also
+                // catch VirtualMachineError (see above). However, there is
+                // _still_ a possibility that you are dealing with a cascading
+                // error condition, so you also need to check to see if the JVM
+                // is still usable:
+                SystemFailure.checkFailure();
+                this.owner.getConduit().getCancelCriterion().checkCancelInProgress(ex);
+                this.owner.getConduit().getStats().decMessagesBeingReceived(md.size());
+                failureMsg = "Unexpected failure deserializing message";
+                failureEx = ex;
+                rpId = md.getRPid();
+                logger.fatal("Unexpected failure deserializing message",
+                    failureEx);
+              } finally {
+                msgLength = md.size();
+                releaseMsgDestreamer(nioMsgId, md);
+                if (interrupted) {
+                  Thread.currentThread().interrupt();
+                }
+              }
+              if (msg != null) {
+                try {
+                  if (!dispatchMessage(msg, msgLength, directAck)) {
+                    directAck = false;
+                  }
+                } catch (MemberShunnedException e) {
+                  // not a member anymore - don't reply
+                  directAck = false;
+                } catch (Exception de) {
+                  this.owner.getConduit().getCancelCriterion().checkCancelInProgress(de);
+                  logger.fatal("Error dispatching message", de);
+                } catch (ThreadDeath td) {
+                  throw td;
+                } catch (VirtualMachineError err) {
+                  SystemFailure.initiateFailure(err);
+                  // If this ever returns, rethrow the error. We're poisoned
+                  // now, so don't let this thread continue.
+                  throw err;
+                } catch (Throwable t) {
+                  // Whenever you catch Error or Throwable, you must also
+                  // catch VirtualMachineError (see above). However, there is
+                  // _still_ a possibility that you are dealing with a cascading
+                  // error condition, so you also need to check to see if the JVM
+                  // is still usable:
+                  SystemFailure.checkFailure();
+                  logger.fatal("Throwable dispatching message", t);
+                }
+              } else if (failureEx != null) {
+                sendFailureReply(rpId, failureMsg, failureEx, directAck);
+              }
             }
           } else {
-            ByteBufferInputStream bbis = new ByteBufferInputStream(peerDataBuffer);
+            // read HANDSHAKE
+            ByteBufferInputStream bbis = new ByteBufferInputStream(nioInputBuffer);
             DataInputStream dis = new DataInputStream(bbis);
             if (!this.isReceiver) {
-              // we read the handshake and then stop processing since we don't want
-              // to process the input buffer anymore in a handshake thread
-              readHandshakeForSender(dis, peerDataBuffer);
-              return;
+              try {
+                this.replyCode = dis.readUnsignedByte();
+                if (this.replyCode == REPLY_CODE_OK_WITH_ASYNC_INFO) {
+                  this.asyncDistributionTimeout = dis.readInt();
+                  this.asyncQueueTimeout = dis.readInt();
+                  this.asyncMaxQueueSize = (long) dis.readInt() * (1024 * 1024);
+                  if (this.asyncDistributionTimeout != 0) {
+                    logger.info("{} async configuration received {}.",
+                        p2pReaderName(),
+                        " asyncDistributionTimeout=" + this.asyncDistributionTimeout
+                            + " asyncQueueTimeout=" + this.asyncQueueTimeout
+                            + " asyncMaxQueueSize="
+                            + (this.asyncMaxQueueSize / (1024 * 1024)));
+                  }
+                  // read the product version ordinal for on-the-fly serialization
+                  // transformations (for rolling upgrades)
+                  this.remoteVersion = Version.readVersion(dis, true);
+                }
+              } catch (Exception e) {
+                this.owner.getConduit().getCancelCriterion().checkCancelInProgress(e);
+                logger.fatal("Error deserializing P2P handshake reply", e);
+                this.readerShuttingDown = true;
+                requestClose("Error deserializing P2P handshake reply");
+                return;
+              } catch (ThreadDeath td) {
+                throw td;
+              } catch (VirtualMachineError err) {
+                SystemFailure.initiateFailure(err);
+                // If this ever returns, rethrow the error. We're poisoned
+                // now, so don't let this thread continue.
+                throw err;
+              } catch (Throwable t) {
+                // Whenever you catch Error or Throwable, you must also
+                // catch VirtualMachineError (see above). However, there is
+                // _still_ a possibility that you are dealing with a cascading
+                // error condition, so you also need to check to see if the JVM
+                // is still usable:
+                SystemFailure.checkFailure();
+                logger.fatal("Throwable deserializing P2P handshake reply",
+                    t);
+                this.readerShuttingDown = true;
+                requestClose("Throwable deserializing P2P handshake reply");
+                return;
+              }
+              if (this.replyCode != REPLY_CODE_OK
+                  && this.replyCode != REPLY_CODE_OK_WITH_ASYNC_INFO) {
+                String err =
+                    "Unknown handshake reply code: %s nioMessageLength: %s";
+                Object[] errArgs = new Object[] {Integer.valueOf(this.replyCode),
+                    Integer.valueOf(nioMessageLength)};
+                if (replyCode == 0 && logger.isDebugEnabled()) { // bug 37113
+                  logger.debug(
+                      String.format(err, errArgs) + " (peer probably departed ungracefully)");
+                } else {
+                  logger.fatal(err, errArgs);
+                }
+                this.readerShuttingDown = true;
+                requestClose(String.format(err, errArgs));
+                return;
+              }
+              notifyHandshakeWaiter(true);
             } else {
-              if (readHandshakeForReceiver(dis)) {
-                ioFilter.doneReading(peerDataBuffer);
+              try {
+                byte b = dis.readByte();
+                if (b != 0) {
+                  throw new IllegalStateException(
+                      String.format(
+                          "Detected old version (pre 5.0.1) of GemFire or non-GemFire during handshake due to initial byte being %s",
+                          new Byte(b)));
+                }
+                byte handshakeByte = dis.readByte();
+                if (handshakeByte != HANDSHAKE_VERSION) {
+                  throw new IllegalStateException(
+                      String.format(
+                          "Detected wrong version of GemFire product during handshake. Expected %s but found %s",
+                          new Object[] {new Byte(HANDSHAKE_VERSION), new Byte(handshakeByte)}));
+                }
+                InternalDistributedMember remote = DSFIDFactory.readInternalDistributedMember(dis);
+                setRemoteAddr(remote);
+                this.sharedResource = dis.readBoolean();
+                this.preserveOrder = dis.readBoolean();
+                this.uniqueId = dis.readLong();
+                // read the product version ordinal for on-the-fly serialization
+                // transformations (for rolling upgrades)
+                this.remoteVersion = Version.readVersion(dis, true);
+                int dominoNumber = 0;
+                if (this.remoteVersion == null
+                    || (this.remoteVersion.compareTo(Version.GFE_80) >= 0)) {
+                  dominoNumber = dis.readInt();
+                  if (this.sharedResource) {
+                    dominoNumber = 0;
+                  }
+                  dominoCount.set(dominoNumber);
+                  // this.senderName = dis.readUTF();
+                }
+                if (!this.sharedResource) {
+                  if (tipDomino()) {
+                    logger.info(
+                        "thread owned receiver forcing itself to send on thread owned sockets");
+                    // bug #49565 - if domino count is >= 2 use shared resources.
+                    // Also see DistributedCacheOperation#supportsDirectAck
+                  } else { // if (dominoNumber < 2) {
+                    ConnectionTable.threadWantsOwnResources();
+                    if (logger.isDebugEnabled()) {
+                      logger.debug(
+                          "thread-owned receiver with domino count of {} will prefer sending on thread-owned sockets",
+                          dominoNumber);
+                    }
+                    // } else {
+                    // ConnectionTable.threadWantsSharedResources();
+                  }
+                  this.conduit.getStats().incThreadOwnedReceivers(1L, dominoNumber);
+                  // Because this thread is not shared resource, it will be used for direct
+                  // ack. Direct ack messages can be large. This call will resize the send
+                  // buffer.
+                  setSendBufferSize(this.socket);
+                }
+                // String name = owner.getDM().getConfig().getName();
+                // if (name == null) {
+                // name = "pid="+OSProcess.getId();
+                // }
+                setThreadName(dominoNumber);
+              } catch (Exception e) {
+                this.owner.getConduit().getCancelCriterion().checkCancelInProgress(e); // bug 37101
+                logger.fatal("Error deserializing P2P handshake message", e);
+                this.readerShuttingDown = true;
+                requestClose("Error deserializing P2P handshake message");
+                return;
+              }
+              if (logger.isDebugEnabled()) {
+                logger.debug("P2P handshake remoteAddr is {}{}", this.remoteAddr,
+                    (this.remoteVersion != null ? " (" + this.remoteVersion + ')' : ""));
+              }
+              try {
+                String authInit = System.getProperty(
+                    DistributionConfigImpl.SECURITY_SYSTEM_PREFIX + SECURITY_PEER_AUTH_INIT);
+                boolean isSecure = authInit != null && authInit.length() != 0;
+
+                if (isSecure) {
+                  if (owner.getConduit().waitForMembershipCheck(this.remoteAddr)) {
+                    sendOKHandshakeReply(); // fix for bug 33224
+                    notifyHandshakeWaiter(true);
+                  } else {
+                    // ARB: check if we need notifyHandshakeWaiter() call.
+                    notifyHandshakeWaiter(false);
+                    logger.warn("{} timed out during a membership check.",
+                        p2pReaderName());
+                    return;
+                  }
+                } else {
+                  sendOKHandshakeReply(); // fix for bug 33224
+                  try {
+                    notifyHandshakeWaiter(true);
+                  } catch (Exception e) {
+                    logger.fatal("Uncaught exception from listener", e);
+                  }
+                }
+              } catch (IOException ex) {
+                final String err = "Failed sending handshake reply";
+                if (logger.isDebugEnabled()) {
+                  logger.debug(err, ex);
+                }
+                this.readerShuttingDown = true;
+                requestClose(err + ": " + ex);
                 return;
               }
             }
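
The in-lined sender-side handshake handling restored above reads one unsigned reply byte and, when the peer advertises asynchronous-queue settings, three additional ints (the maximum queue size travels in megabytes). A minimal standalone sketch of just that wire format follows; the reply-code constants and the sample bytes are placeholders for illustration, not the values Connection actually uses.

    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.IOException;

    class HandshakeReplySketch {
      // hypothetical reply codes for this sketch only; the real constants live in Connection
      static final int REPLY_CODE_OK = 1;
      static final int REPLY_CODE_OK_WITH_ASYNC_INFO = 2;

      static void parseReply(DataInputStream dis) throws IOException {
        int replyCode = dis.readUnsignedByte();
        if (replyCode == REPLY_CODE_OK_WITH_ASYNC_INFO) {
          int asyncDistributionTimeout = dis.readInt();
          int asyncQueueTimeout = dis.readInt();
          long asyncMaxQueueSize = (long) dis.readInt() * 1024 * 1024; // sent in MB
          System.out.println("async config: distributionTimeout=" + asyncDistributionTimeout
              + " queueTimeout=" + asyncQueueTimeout + " maxQueueSize=" + asyncMaxQueueSize);
        } else if (replyCode != REPLY_CODE_OK) {
          throw new IOException("unknown handshake reply code " + replyCode);
        }
      }

      public static void main(String[] args) throws IOException {
        // reply advertising async info: code 2, then 1000 ms, 2000 ms, 4 MB
        byte[] reply = {2, 0, 0, 3, (byte) 0xE8, 0, 0, 7, (byte) 0xD0, 0, 0, 0, 4};
        parseReply(new DataInputStream(new ByteArrayInputStream(reply)));
      }
    }
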
@@ -2971,405 +3846,21 @@ public class Connection implements Runnable {
             continue;
           }
           accessed();
-          peerDataBuffer.limit(oldLimit);
-          peerDataBuffer.position(startPos + messageLength);
+          nioInputBuffer.limit(oldLimit);
+          nioInputBuffer.position(startPos + nioMessageLength);
         } else {
           done = true;
-          if (getConduit().useSSL()) {
-            ioFilter.doneReading(peerDataBuffer);
-          } else {
-            compactOrResizeBuffer(messageLength);
-          }
+          compactOrResizeBuffer(nioMessageLength);
         }
       } else {
-        ioFilter.doneReading(peerDataBuffer);
         done = true;
-      }
-    }
-  }
-
-  private boolean readHandshakeForReceiver(DataInputStream dis) {
-    try {
-      byte b = dis.readByte();
-      if (b != 0) {
-        throw new IllegalStateException(
-            String.format(
-                "Detected old version (pre 5.0.1) of GemFire or non-GemFire during handshake due to initial byte being %s",
-                b));
-      }
-      byte handshakeByte = dis.readByte();
-      if (handshakeByte != HANDSHAKE_VERSION) {
-        throw new IllegalStateException(
-            String.format(
-                "Detected wrong version of GemFire product during handshake. Expected %s but found %s",
-                HANDSHAKE_VERSION, handshakeByte));
-      }
-      InternalDistributedMember remote = DSFIDFactory.readInternalDistributedMember(dis);
-      setRemoteAddr(remote);
-      this.sharedResource = dis.readBoolean();
-      this.preserveOrder = dis.readBoolean();
-      this.uniqueId = dis.readLong();
-      // read the product version ordinal for on-the-fly serialization
-      // transformations (for rolling upgrades)
-      this.remoteVersion = Version.readVersion(dis, true);
-      int dominoNumber = 0;
-      if (this.remoteVersion == null
-          || (this.remoteVersion.compareTo(Version.GFE_80) >= 0)) {
-        dominoNumber = dis.readInt();
-        if (this.sharedResource) {
-          dominoNumber = 0;
-        }
-        dominoCount.set(dominoNumber);
-        // this.senderName = dis.readUTF();
-      }
-      if (!this.sharedResource) {
-        if (tipDomino()) {
-          logger.info(
-              "thread owned receiver forcing itself to send on thread owned sockets");
-          // bug #49565 - if domino count is >= 2 use shared resources.
-          // Also see DistributedCacheOperation#supportsDirectAck
-        } else { // if (dominoNumber < 2) {
-          ConnectionTable.threadWantsOwnResources();
-          if (logger.isDebugEnabled()) {
-            logger.debug(
-                "thread-owned receiver with domino count of {} will prefer sending on thread-owned sockets",
-                dominoNumber);
-          }
-          // } else {
-          // ConnectionTable.threadWantsSharedResources();
-        }
-        this.conduit.getStats().incThreadOwnedReceivers(1L, dominoNumber);
-        // Because this thread is not shared resource, it will be used for direct
-        // ack. Direct ack messages can be large. This call will resize the send
-        // buffer.
-        setSendBufferSize(this.socket);
-      }
-      // String name = owner.getDM().getConfig().getName();
-      // if (name == null) {
-      // name = "pid="+OSProcess.getId();
-      // }
-      setThreadName(dominoNumber);
-    } catch (Exception e) {
-      this.owner.getConduit().getCancelCriterion().checkCancelInProgress(e); // bug 37101
-      logger.fatal("Error deserializing P2P handshake message", e);
-      this.readerShuttingDown = true;
-      requestClose("Error deserializing P2P handshake message");
-      return true;
-    }
-    if (logger.isDebugEnabled()) {
-      logger.debug("P2P handshake remoteAddr is {}{}", this.remoteAddr,
-          (this.remoteVersion != null ? " (" + this.remoteVersion + ')' : ""));
-    }
-    try {
-      String authInit = System.getProperty(
-          DistributionConfigImpl.SECURITY_SYSTEM_PREFIX + SECURITY_PEER_AUTH_INIT);
-      boolean isSecure = authInit != null && authInit.length() != 0;
-
-      if (isSecure) {
-        if (owner.getConduit().waitForMembershipCheck(this.remoteAddr)) {
-          sendOKHandshakeReply(); // fix for bug 33224
-          notifyHandshakeWaiter(true);
+        if (nioInputBuffer.position() != 0) {
+          nioInputBuffer.compact();
         } else {
-          // ARB: check if we need notifyHandshakeWaiter() call.
-          notifyHandshakeWaiter(false);
-          logger.warn("{} timed out during a membership check.",
-              p2pReaderName());
-          return true;
-        }
-      } else {
-        sendOKHandshakeReply(); // fix for bug 33224
-        try {
-          notifyHandshakeWaiter(true);
-        } catch (Exception e) {
-          logger.fatal("Uncaught exception from listener", e);
-        }
-      }
-      this.finishedConnecting = true;
-    } catch (IOException ex) {
-      final String err = "Failed sending handshake reply";
-      if (logger.isDebugEnabled()) {
-        logger.debug(err, ex);
-      }
-      this.readerShuttingDown = true;
-      requestClose(err + ": " + ex);
-      return true;
-    }
-    return false;
-  }
-
-  private boolean readMessageHeader(ByteBuffer peerDataBuffer) throws IOException {
-    int headerStartPos = peerDataBuffer.position();
-    messageLength = peerDataBuffer.getInt();
-    /* nioMessageVersion = */
-    calcHdrVersion(messageLength);
-    messageLength = calcMsgByteSize(messageLength);
-    messageType = peerDataBuffer.get();
-    messageId = peerDataBuffer.getShort();
-    directAck = (messageType & DIRECT_ACK_BIT) != 0;
-    if (directAck) {
-      messageType &= ~DIRECT_ACK_BIT; // clear the ack bit
-    }
-    // Following validation fixes bug 31145
-    if (!validMsgType(messageType)) {
-      Integer nioMessageTypeInteger = (int) messageType;
-      logger.fatal("Unknown P2P message type: {}", nioMessageTypeInteger);
-      this.readerShuttingDown = true;
-      requestClose(String.format("Unknown P2P message type: %s",
-          nioMessageTypeInteger));
-      return true;
-    }
-    lengthSet = true;
-    // keep the header "in" the buffer until we have read the entire msg.
-    // Trust me: this will reduce copying on large messages.
-    peerDataBuffer.position(headerStartPos);
-    return false;
-  }
-
-  private void readMessage(ByteBuffer peerDataBuffer) {
-    if (messageType == NORMAL_MSG_TYPE) {
-      this.owner.getConduit().getStats().incMessagesBeingReceived(true, messageLength);
-      ByteBufferInputStream bbis =
-          remoteVersion == null ? new ByteBufferInputStream(peerDataBuffer)
-              : new VersionedByteBufferInputStream(peerDataBuffer, remoteVersion);
-      DistributionMessage msg;
-      try {
-        ReplyProcessor21.initMessageRPId();
-        // add serialization stats
-        long startSer = this.owner.getConduit().getStats().startMsgDeserialization();
-        int startingPosition = peerDataBuffer.position();
-        try {
-          msg = (DistributionMessage) InternalDataSerializer.readDSFID(bbis);
-        } catch (SerializationException e) {
-          logger.info("input buffer starting position {} "
-              + " current position {} limit {} capacity {} message length {}",
-              startingPosition, peerDataBuffer.position(), peerDataBuffer.limit(),
-              peerDataBuffer.capacity(), messageLength);
-          throw e;
-        }
-        this.owner.getConduit().getStats().endMsgDeserialization(startSer);
-        if (bbis.available() != 0) {
-          logger.warn("Message deserialization of {} did not read {} bytes.",
-              msg, bbis.available());
-        }
-        try {
-          if (!dispatchMessage(msg, messageLength, directAck)) {
-            directAck = false;
-          }
-        } catch (MemberShunnedException e) {
-          directAck = false; // don't respond (bug39117)
-        } catch (Exception de) {
-          this.owner.getConduit().getCancelCriterion().checkCancelInProgress(de);
-          logger.fatal("Error dispatching message", de);
-        } catch (ThreadDeath td) {
-          throw td;
-        } catch (VirtualMachineError err) {
-          SystemFailure.initiateFailure(err);
-          // If this ever returns, rethrow the error. We're poisoned
-          // now, so don't let this thread continue.
-          throw err;
-        } catch (Throwable t) {
-          // Whenever you catch Error or Throwable, you must also
-          // catch VirtualMachineError (see above). However, there is
-          // _still_ a possibility that you are dealing with a cascading
-          // error condition, so you also need to check to see if the JVM
-          // is still usable:
-          SystemFailure.checkFailure();
-          logger.fatal("Throwable dispatching message", t);
-        }
-      } catch (VirtualMachineError err) {
-        SystemFailure.initiateFailure(err);
-        // If this ever returns, rethrow the error. We're poisoned
-        // now, so don't let this thread continue.
-        throw err;
-      } catch (Throwable t) {
-        // Whenever you catch Error or Throwable, you must also
-        // catch VirtualMachineError (see above). However, there is
-        // _still_ a possibility that you are dealing with a cascading
-        // error condition, so you also need to check to see if the JVM
-        // is still usable:
-        SystemFailure.checkFailure();
-        sendFailureReply(ReplyProcessor21.getMessageRPId(),
-            "Error deserializing message", t,
-            directAck);
-        if (t instanceof ThreadDeath) {
-          throw (ThreadDeath) t;
-        }
-        if (t instanceof CancelException) {
-          if (!(t instanceof CacheClosedException)) {
-            // Just log a message if we had trouble deserializing due to
-            // CacheClosedException; see bug 43543
-            throw (CancelException) t;
-          }
-        }
-        logger.fatal("Error deserializing message", t);
-      } finally {
-        ReplyProcessor21.clearMessageRPId();
-      }
-    } else if (messageType == CHUNKED_MSG_TYPE) {
-      MsgDestreamer md = obtainMsgDestreamer(messageId, remoteVersion);
-      this.owner.getConduit().getStats().incMessagesBeingReceived(md.size() == 0,
-          messageLength);
-      try {
-        md.addChunk(peerDataBuffer, messageLength);
-      } catch (IOException ex) {
-      }
-    } else /* (nioMessageType == END_CHUNKED_MSG_TYPE) */ {
-      MsgDestreamer md = obtainMsgDestreamer(messageId, remoteVersion);
-      this.owner.getConduit().getStats().incMessagesBeingReceived(md.size() == 0,
-          messageLength);
-      try {
-        md.addChunk(peerDataBuffer, messageLength);
-      } catch (IOException ex) {
-        logger.fatal("Failed handling end chunk message", ex);
-      }
-      DistributionMessage msg = null;
-      int msgLength;
-      String failureMsg = null;
-      Throwable failureEx = null;
-      int rpId = 0;
-      boolean interrupted = false;
-      try {
-        msg = md.getMessage();
-      } catch (ClassNotFoundException ex) {
-        this.owner.getConduit().getStats().decMessagesBeingReceived(md.size());
-        failureMsg = "ClassNotFound deserializing message";
-        failureEx = ex;
-        rpId = md.getRPid();
-        logger.fatal("ClassNotFound deserializing message: {}", ex.toString());
-      } catch (IOException ex) {
-        this.owner.getConduit().getStats().decMessagesBeingReceived(md.size());
-        failureMsg = "IOException deserializing message";
-        failureEx = ex;
-        rpId = md.getRPid();
-        logger.fatal("IOException deserializing message", failureEx);
-      } catch (InterruptedException ex) {
-        interrupted = true;
-        this.owner.getConduit().getCancelCriterion().checkCancelInProgress(ex);
-      } catch (VirtualMachineError err) {
-        SystemFailure.initiateFailure(err);
-        // If this ever returns, rethrow the error. We're poisoned
-        // now, so don't let this thread continue.
-        throw err;
-      } catch (Throwable ex) {
-        // Whenever you catch Error or Throwable, you must also
-        // catch VirtualMachineError (see above). However, there is
-        // _still_ a possibility that you are dealing with a cascading
-        // error condition, so you also need to check to see if the JVM
-        // is still usable:
-        SystemFailure.checkFailure();
-        this.owner.getConduit().getCancelCriterion().checkCancelInProgress(ex);
-        this.owner.getConduit().getStats().decMessagesBeingReceived(md.size());
-        failureMsg = "Unexpected failure deserializing message";
-        failureEx = ex;
-        rpId = md.getRPid();
-        logger.fatal("Unexpected failure deserializing message",
-            failureEx);
-      } finally {
-        msgLength = md.size();
-        releaseMsgDestreamer(messageId, md);
-        if (interrupted) {
-          Thread.currentThread().interrupt();
+          nioInputBuffer.position(nioInputBuffer.limit());
+          nioInputBuffer.limit(nioInputBuffer.capacity());
         }
       }
-      if (msg != null) {
-        try {
-          if (!dispatchMessage(msg, msgLength, directAck)) {
-            directAck = false;
-          }
-        } catch (MemberShunnedException e) {
-          // not a member anymore - don't reply
-          directAck = false;
-        } catch (Exception de) {
-          this.owner.getConduit().getCancelCriterion().checkCancelInProgress(de);
-          logger.fatal("Error dispatching message", de);
-        } catch (ThreadDeath td) {
-          throw td;
-        } catch (VirtualMachineError err) {
-          SystemFailure.initiateFailure(err);
-          // If this ever returns, rethrow the error. We're poisoned
-          // now, so don't let this thread continue.
-          throw err;
-        } catch (Throwable t) {
-          // Whenever you catch Error or Throwable, you must also
-          // catch VirtualMachineError (see above). However, there is
-          // _still_ a possibility that you are dealing with a cascading
-          // error condition, so you also need to check to see if the JVM
-          // is still usable:
-          SystemFailure.checkFailure();
-          logger.fatal("Throwable dispatching message", t);
-        }
-      } else if (failureEx != null) {
-        sendFailureReply(rpId, failureMsg, failureEx, directAck);
-      }
-    }
-  }
-
-  private void readHandshakeForSender(DataInputStream dis, ByteBuffer peerDataBuffer) {
-    try {
-      this.replyCode = dis.readUnsignedByte();
-      switch (replyCode) {
-        case REPLY_CODE_OK:
-          ioFilter.doneReading(peerDataBuffer);
-          notifyHandshakeWaiter(true);
-          return;
-        case REPLY_CODE_OK_WITH_ASYNC_INFO:
-          this.asyncDistributionTimeout = dis.readInt();
-          this.asyncQueueTimeout = dis.readInt();
-          this.asyncMaxQueueSize = (long) dis.readInt() * (1024 * 1024);
-          if (this.asyncDistributionTimeout != 0) {
-            logger.info("{} async configuration received {}.",
-                p2pReaderName(),
-                " asyncDistributionTimeout=" + this.asyncDistributionTimeout
-                    + " asyncQueueTimeout=" + this.asyncQueueTimeout
-                    + " asyncMaxQueueSize="
-                    + (this.asyncMaxQueueSize / (1024 * 1024)));
-          }
-          // read the product version ordinal for on-the-fly serialization
-          // transformations (for rolling upgrades)
-          this.remoteVersion = Version.readVersion(dis, true);
-          ioFilter.doneReading(peerDataBuffer);
-          notifyHandshakeWaiter(true);
-          return;
-        default:
-          String err =
-              "Unknown handshake reply code: %s nioMessageLength: %s";
-          Object[] errArgs = new Object[] {this.replyCode,
-              messageLength};
-          if (replyCode == 0 && logger.isDebugEnabled()) { // bug 37113
-            logger.debug(
-                String.format(err, errArgs) + " (peer probably departed ungracefully)");
-          } else {
-            logger.fatal(err, errArgs);
-          }
-          this.readerShuttingDown = true;
-          requestClose(String.format(err, errArgs));
-          return;
-      }
-    } catch (Exception e) {
-      this.owner.getConduit().getCancelCriterion().checkCancelInProgress(e);
-      logger.fatal("Error deserializing P2P handshake reply", e);
-      this.readerShuttingDown = true;
-      requestClose("Error deserializing P2P handshake reply");
-      return;
-    } catch (ThreadDeath td) {
-      throw td;
-    } catch (VirtualMachineError err) {
-      SystemFailure.initiateFailure(err);
-      // If this ever returns, rethrow the error. We're poisoned
-      // now, so don't let this thread continue.
-      throw err;
-    } catch (Throwable t) {
-      // Whenever you catch Error or Throwable, you must also
-      // catch VirtualMachineError (see above). However, there is
-      // _still_ a possibility that you are dealing with a cascading
-      // error condition, so you also need to check to see if the JVM
-      // is still usable:
-      SystemFailure.checkFailure();
-      logger.fatal("Throwable deserializing P2P handshake reply",
-          t);
-      this.readerShuttingDown = true;
-      requestClose("Throwable deserializing P2P handshake reply");
-      return;
     }
   }
 
@@ -3381,28 +3872,28 @@ public class Connection implements Runnable {
   }
 
   private void compactOrResizeBuffer(int messageLength) {
-    final int oldBufferSize = inputBuffer.capacity();
+    final int oldBufferSize = nioInputBuffer.capacity();
     final DMStats stats = this.owner.getConduit().getStats();
     int allocSize = messageLength + MSG_HEADER_BYTES;
     if (oldBufferSize < allocSize) {
       // need a bigger buffer
       logger.info("Allocating larger network read buffer, new size is {} old size was {}.",
-          allocSize, oldBufferSize);
-      ByteBuffer oldBuffer = inputBuffer;
-      inputBuffer = Buffers.acquireReceiveBuffer(allocSize, stats);
+          Integer.valueOf(allocSize), Integer.valueOf(oldBufferSize));
+      ByteBuffer oldBuffer = nioInputBuffer;
+      nioInputBuffer = Buffers.acquireReceiveBuffer(allocSize, stats);
 
       if (oldBuffer != null) {
         int oldByteCount = oldBuffer.remaining();
-        inputBuffer.put(oldBuffer);
-        inputBuffer.position(oldByteCount);
+        nioInputBuffer.put(oldBuffer);
+        nioInputBuffer.position(oldByteCount);
         Buffers.releaseReceiveBuffer(oldBuffer, stats);
       }
     } else {
-      if (inputBuffer.position() != 0) {
-        inputBuffer.compact();
+      if (nioInputBuffer.position() != 0) {
+        nioInputBuffer.compact();
       } else {
-        inputBuffer.position(inputBuffer.limit());
-        inputBuffer.limit(inputBuffer.capacity());
+        nioInputBuffer.position(nioInputBuffer.limit());
+        nioInputBuffer.limit(nioInputBuffer.capacity());
       }
     }
   }
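
compactOrResizeBuffer() above either grows the receive buffer when an incoming message will not fit or compacts the existing one so unread bytes move to the front. Below is a reduced sketch of the same decision using plain heap allocation, without the pooled buffer helper; the header-size constant is an assumption taken from the 7-byte header parsed elsewhere in this class.

    import java.nio.ByteBuffer;

    class CompactOrResizeSketch {
      static final int HEADER_BYTES = 7; // 4-byte length + 1-byte type + 2-byte id

      static ByteBuffer compactOrResize(ByteBuffer buf, int messageLength) {
        int needed = messageLength + HEADER_BYTES;
        if (buf.capacity() < needed) {
          // too small: allocate a bigger buffer and carry over the unread bytes
          ByteBuffer bigger = ByteBuffer.allocate(needed);
          bigger.put(buf); // position ends up just past the carried-over bytes
          return bigger;
        }
        if (buf.position() != 0) {
          buf.compact(); // slide unread bytes to the front, keep filling after them
        } else {
          buf.position(buf.limit()); // nothing consumed yet; open up the remainder
          buf.limit(buf.capacity());
        }
        return buf;
      }

      public static void main(String[] args) {
        ByteBuffer small = ByteBuffer.allocate(16);
        small.put(new byte[] {1, 2, 3}).flip(); // three unread bytes pending
        ByteBuffer grown = compactOrResize(small, 64);
        System.out.println("capacity=" + grown.capacity() + " position=" + grown.position());
      }
    }
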
@@ -3438,11 +3929,11 @@ public class Connection implements Runnable {
     return result;
   }
 
-  boolean isSocketClosed() {
+  public boolean isSocketClosed() {
     return this.socket.isClosed() || !this.socket.isConnected();
   }
 
-  boolean isReceiverStopped() {
+  public boolean isReceiverStopped() {
     return this.stopped;
   }
 
@@ -3465,7 +3956,7 @@ public class Connection implements Runnable {
   /**
    * Return the version of the member on the other side of this connection.
    */
-  Version getRemoteVersion() {
+  public Version getRemoteVersion() {
     return this.remoteVersion;
   }
 
@@ -3486,14 +3977,14 @@ public class Connection implements Runnable {
    * @return true if the connection was initiated here
    * @since GemFire 5.1
    */
-  boolean getOriginatedHere() {
+  protected boolean getOriginatedHere() {
     return !this.isReceiver;
   }
 
   /**
    * answers whether this connection is used for ordered message delivery
    */
-  boolean getPreserveOrder() {
+  protected boolean getPreserveOrder() {
     return preserveOrder;
   }
 
@@ -3507,14 +3998,14 @@ public class Connection implements Runnable {
   /**
    * answers the number of messages received by this connection
    */
-  long getMessagesReceived() {
+  protected long getMessagesReceived() {
     return messagesReceived;
   }
 
   /**
    * answers the number of messages sent on this connection
    */
-  long getMessagesSent() {
+  protected long getMessagesSent() {
     return messagesSent;
   }
 
@@ -3567,4 +4058,30 @@ public class Connection implements Runnable {
     releaseSendPermission();
   }
 
+  boolean nioChecked;
+  boolean useNIO;
+
+  private boolean useNIO() {
+    if (TCPConduit.useSSL) {
+      return false;
+    }
+    if (this.nioChecked) {
+      return this.useNIO;
+    }
+    this.nioChecked = true;
+    this.useNIO = this.owner.getConduit().useNIO();
+    if (!this.useNIO) {
+      return false;
+    }
+    // JDK bug 6230761 - NIO can't be used with IPv6 on Windows
+    if (this.socket != null && (this.socket.getInetAddress() instanceof Inet6Address)) {
+      String os = System.getProperty("os.name");
+      if (os != null) {
+        if (os.contains("Windows")) {
+          this.useNIO = false;
+        }
+      }
+    }
+    return this.useNIO;
+  }
 }
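
Connection above and MsgReader below parse the same fixed seven-byte message header: a 4-byte length, a 1-byte type that also carries the direct-ack flag, and a 2-byte message id. The following round-trip sketch shows that layout; the DIRECT_ACK bit value chosen here is an assumption for illustration, not the constant defined in Connection.

    import java.nio.ByteBuffer;

    class MessageHeaderSketch {
      static final byte DIRECT_ACK_BIT = (byte) 0x20; // assumed flag value for this sketch
      static final int HEADER_BYTES = 4 + 1 + 2;

      static ByteBuffer encode(int length, byte type, short msgId, boolean directAck) {
        ByteBuffer header = ByteBuffer.allocate(HEADER_BYTES);
        header.putInt(length);
        header.put(directAck ? (byte) (type | DIRECT_ACK_BIT) : type);
        header.putShort(msgId);
        header.flip();
        return header;
      }

      public static void main(String[] args) {
        ByteBuffer header = encode(1024, (byte) 1, (short) 42, true);
        int length = header.getInt();
        byte type = header.get();
        short msgId = header.getShort();
        boolean directAck = (type & DIRECT_ACK_BIT) != 0;
        if (directAck) {
          type &= ~DIRECT_ACK_BIT; // clear the ack bit before acting on the type
        }
        System.out.println("length=" + length + " type=" + type
            + " id=" + msgId + " directAck=" + directAck);
      }
    }
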
diff --git a/geode-core/src/main/java/org/apache/geode/internal/tcp/ConnectionTable.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/ConnectionTable.java
index f3a4432..a8ecb21 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/tcp/ConnectionTable.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/ConnectionTable.java
@@ -398,9 +398,6 @@ public class ConnectionTable {
     } // synchronized
 
     if (pc != null) {
-      if (logger.isDebugEnabled()) {
-        logger.debug("created PendingConnection {}", pc);
-      }
       result = handleNewPendingConnection(id, true /* fixes bug 43386 */, preserveOrder, m, pc,
           startTime, ackTimeout, ackSATimeout);
       if (!preserveOrder && scheduleTimeout) {
@@ -1184,11 +1181,9 @@ public class ConnectionTable {
         targetMember = this.id;
       }
 
-      int attempt = 0;
       for (;;) {
-        if (!this.pending) {
+        if (!this.pending)
           break;
-        }
         getConduit().getCancelCriterion().checkCancelInProgress(null);
 
         // wait a little bit...
@@ -1230,8 +1225,7 @@ public class ConnectionTable {
         e = m.get(this.id);
         // }
         if (e == this) {
-          attempt += 1;
-          if (logger.isDebugEnabled() && (attempt % 20 == 1)) {
+          if (logger.isDebugEnabled()) {
             logger.debug("Waiting for pending connection to complete: {} connection to {}; {}",
                 ((this.preserveOrder) ? "ordered" : "unordered"), this.id, this);
           }
@@ -1259,10 +1253,6 @@ public class ConnectionTable {
       return this.conn;
 
     }
-
-    public String toString() {
-      return super.toString() + " created by " + connectingThread.getName();
-    }
   }
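
The ConnectionTable hunks above revert a small throttle that logged the "waiting for pending connection" debug message only on every 20th polling iteration, along with the PendingConnection toString override. The throttling idiom being removed is just a modulo check on a loop counter; a trivial illustration, with a sleep and a fake completion condition standing in for the real wait:

    class ThrottledWaitSketch {
      public static void main(String[] args) throws InterruptedException {
        int attempt = 0;
        boolean pending = true;
        while (pending) {
          attempt += 1;
          if (attempt % 20 == 1) { // log on the 1st, 21st, 41st ... pass only
            System.out.println("still waiting for pending connection, attempt " + attempt);
          }
          Thread.sleep(5);
          if (attempt >= 60) { // stand-in for the pending flag being cleared
            pending = false;
          }
        }
      }
    }
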
 
 
diff --git a/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgDestreamer.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgDestreamer.java
index 2a1db17..4c74c1b 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgDestreamer.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgDestreamer.java
@@ -143,6 +143,21 @@ public class MsgDestreamer {
   }
 
   /**
+   * Adds a chunk to be deserialized.
+   *
+   * @param b a byte array containing the bytes of the chunk
+   */
+  public void addChunk(byte[] b) throws IOException {
+    // if this destreamer has failed or this chunk is empty just return
+    if (this.failure == null && b != null && b.length > 0) {
+      // logit("addChunk length=" + b.length);
+      ByteBuffer bb = ByteBuffer.wrap(b);
+      this.t.addChunk(bb, b.length);
+      this.size += b.length;
+    }
+  }
+
+  /**
    * Returns the number of bytes added to this destreamer.
    */
   public int size() {
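
The addChunk(byte[]) overload added above wraps the array in a ByteBuffer and feeds it through the existing ByteBuffer path; ByteBuffer.wrap is zero-copy, so the buffer shares the array's storage. A tiny illustration (the size accounting mimics, but is not, the MsgDestreamer internals):

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    class WrapChunkSketch {
      public static void main(String[] args) {
        byte[] chunk = "hello".getBytes(StandardCharsets.UTF_8);
        // wrap() copies nothing: position 0, limit == chunk.length, backed by the same array
        ByteBuffer bb = ByteBuffer.wrap(chunk);
        int size = bb.remaining(); // what a destreamer would add to its running size
        System.out.println("wrapped " + size + " bytes, hasArray=" + bb.hasArray());
      }
    }
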
diff --git a/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgOutputStream.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgOutputStream.java
index 2d767b8..6c10ea7 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgOutputStream.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgOutputStream.java
@@ -22,7 +22,6 @@ import java.nio.ByteBuffer;
 import org.apache.geode.DataSerializer;
 import org.apache.geode.internal.InternalDataSerializer;
 import org.apache.geode.internal.ObjToByteArraySerializer;
-import org.apache.geode.internal.net.Buffers;
 
 /**
  * MsgOutputStream should no longer be used except in Connection to do the handshake. Otherwise
@@ -38,7 +37,7 @@ public class MsgOutputStream extends OutputStream implements ObjToByteArraySeria
    * The caller of this constructor is responsible for managing the allocated instance.
    */
   public MsgOutputStream(int allocSize) {
-    if (Buffers.useDirectBuffers) {
+    if (TCPConduit.useDirectBuffers) {
       this.buffer = ByteBuffer.allocateDirect(allocSize);
     } else {
       this.buffer = ByteBuffer.allocate(allocSize);
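
MsgOutputStream now consults TCPConduit.useDirectBuffers rather than the Buffers helper when choosing between direct and heap allocation. The underlying choice is just the two ByteBuffer factory methods; a sketch, with the flag driven by a hypothetical system property rather than Geode's real p2p.* settings:

    import java.nio.ByteBuffer;

    class BufferChoiceSketch {
      // hypothetical property name; Geode derives its flag from p2p.* properties
      static final boolean USE_DIRECT = !Boolean.getBoolean("sketch.noDirectBuffers");

      static ByteBuffer allocate(int size) {
        // direct buffers avoid an extra copy on socket I/O but cost more to allocate,
        // so long-lived send/receive buffers are the usual candidates for them
        return USE_DIRECT ? ByteBuffer.allocateDirect(size) : ByteBuffer.allocate(size);
      }

      public static void main(String[] args) {
        ByteBuffer buf = allocate(32 * 1024);
        System.out.println("direct=" + buf.isDirect() + " capacity=" + buf.capacity());
      }
    }
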
diff --git a/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgReader.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgReader.java
index adf0305..45c9e98 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgReader.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgReader.java
@@ -15,77 +15,47 @@
 package org.apache.geode.internal.tcp;
 
 import java.io.IOException;
-import java.nio.BufferUnderflowException;
 import java.nio.ByteBuffer;
 
-import org.apache.logging.log4j.Logger;
-
 import org.apache.geode.distributed.internal.DMStats;
 import org.apache.geode.distributed.internal.DistributionMessage;
 import org.apache.geode.distributed.internal.ReplyProcessor21;
-import org.apache.geode.internal.Assert;
 import org.apache.geode.internal.InternalDataSerializer;
 import org.apache.geode.internal.Version;
-import org.apache.geode.internal.logging.LogService;
-import org.apache.geode.internal.net.Buffers;
-import org.apache.geode.internal.net.NioFilter;
 
 /**
 * This class is currently used for reading direct ack responses. It should probably be used for all
  * of the reading done in Connection.
  *
  */
-public class MsgReader {
-  private static final Logger logger = LogService.getLogger();
-
+public abstract class MsgReader {
   protected final Connection conn;
   protected final Header header = new Header();
-  private final NioFilter ioFilter;
-  private ByteBuffer peerNetData;
-  private final ByteBufferInputStream byteBufferInputStream;
-
+  private final ByteBufferInputStream bbis;
 
-
-  MsgReader(Connection conn, NioFilter nioFilter, ByteBuffer peerNetData, Version version) {
+  public MsgReader(Connection conn, Version version) {
     this.conn = conn;
-    this.ioFilter = nioFilter;
-    this.peerNetData = peerNetData;
-    if (conn.getConduit().useSSL()) {
-      ByteBuffer buffer = ioFilter.getUnwrappedBuffer(peerNetData);
-      buffer.position(0).limit(0);
-    }
-    this.byteBufferInputStream =
+    this.bbis =
         version == null ? new ByteBufferInputStream() : new VersionedByteBufferInputStream(version);
   }
 
-  Header readHeader() throws IOException {
-    ByteBuffer unwrappedBuffer = readAtLeast(Connection.MSG_HEADER_BYTES);
-
-    Assert.assertTrue(unwrappedBuffer.remaining() >= Connection.MSG_HEADER_BYTES);
-
-    int position = unwrappedBuffer.position();
-    int limit = unwrappedBuffer.limit();
-
-    try {
-      int nioMessageLength = unwrappedBuffer.getInt();
-      /* nioMessageVersion = */
-      Connection.calcHdrVersion(nioMessageLength);
-      nioMessageLength = Connection.calcMsgByteSize(nioMessageLength);
-      byte nioMessageType = unwrappedBuffer.get();
-      short nioMsgId = unwrappedBuffer.getShort();
-
-      boolean directAck = (nioMessageType & Connection.DIRECT_ACK_BIT) != 0;
-      if (directAck) {
-        nioMessageType &= ~Connection.DIRECT_ACK_BIT; // clear the ack bit
-      }
-
-      header.setFields(nioMessageLength, nioMessageType, nioMsgId);
-
-      return header;
-    } catch (BufferUnderflowException e) {
-      throw e;
+  public Header readHeader() throws IOException {
+    ByteBuffer nioInputBuffer = readAtLeast(Connection.MSG_HEADER_BYTES);
+    int nioMessageLength = nioInputBuffer.getInt();
+    /* nioMessageVersion = */ Connection.calcHdrVersion(nioMessageLength);
+    nioMessageLength = Connection.calcMsgByteSize(nioMessageLength);
+    byte nioMessageType = nioInputBuffer.get();
+    short nioMsgId = nioInputBuffer.getShort();
+    boolean directAck = (nioMessageType & Connection.DIRECT_ACK_BIT) != 0;
+    if (directAck) {
+      // logger.info("DEBUG: msg from " + getRemoteAddress() + " is direct ack" );
+      nioMessageType &= ~Connection.DIRECT_ACK_BIT; // clear the ack bit
     }
 
+    header.nioMessageLength = nioMessageLength;
+    header.nioMessageType = nioMessageType;
+    header.nioMsgId = nioMsgId;
+    return header;
   }
 
   /**
@@ -93,76 +63,60 @@ public class MsgReader {
    *
    * @return the message, or null if we only received a chunk of the message
    */
-  DistributionMessage readMessage(Header header)
-      throws IOException, ClassNotFoundException {
-    ByteBuffer nioInputBuffer = readAtLeast(header.messageLength);
-    Assert.assertTrue(nioInputBuffer.remaining() >= header.messageLength);
-    this.getStats().incMessagesBeingReceived(true, header.messageLength);
+  public DistributionMessage readMessage(Header header)
+      throws IOException, ClassNotFoundException, InterruptedException {
+    ByteBuffer nioInputBuffer = readAtLeast(header.nioMessageLength);
+    this.getStats().incMessagesBeingReceived(true, header.nioMessageLength);
     long startSer = this.getStats().startMsgDeserialization();
-    int position = nioInputBuffer.position();
-    int limit = nioInputBuffer.limit();
     try {
-      byteBufferInputStream.setBuffer(nioInputBuffer);
+      bbis.setBuffer(nioInputBuffer);
+      DistributionMessage msg = null;
       ReplyProcessor21.initMessageRPId();
-      // dumpState("readMessage ready to deserialize", null, nioInputBuffer, position, limit);
-      return (DistributionMessage) InternalDataSerializer.readDSFID(byteBufferInputStream);
-    } catch (RuntimeException e) {
-      throw e;
-    } catch (IOException e) {
-      throw e;
+      // add serialization stats
+      msg = (DistributionMessage) InternalDataSerializer.readDSFID(bbis);
+      return msg;
     } finally {
       this.getStats().endMsgDeserialization(startSer);
-      this.getStats().decMessagesBeingReceived(header.messageLength);
-      ioFilter.doneReading(nioInputBuffer);
+      this.getStats().decMessagesBeingReceived(header.nioMessageLength);
     }
   }
 
-  void readChunk(Header header, MsgDestreamer md)
-      throws IOException {
-    ByteBuffer unwrappedBuffer = readAtLeast(header.messageLength);
-    this.getStats().incMessagesBeingReceived(md.size() == 0, header.messageLength);
-    md.addChunk(unwrappedBuffer, header.messageLength);
-    // show that the bytes have been consumed by adjusting the buffer's position
-    unwrappedBuffer.position(unwrappedBuffer.position() + header.messageLength);
+  public void readChunk(Header header, MsgDestreamer md)
+      throws IOException, ClassNotFoundException, InterruptedException {
+    ByteBuffer nioInputBuffer = readAtLeast(header.nioMessageLength);
+    this.getStats().incMessagesBeingReceived(md.size() == 0, header.nioMessageLength);
+    md.addChunk(nioInputBuffer, header.nioMessageLength);
   }
 
+  public abstract ByteBuffer readAtLeast(int bytes) throws IOException;
 
-
-  private ByteBuffer readAtLeast(int bytes) throws IOException {
-    peerNetData = ioFilter.ensureWrappedCapacity(bytes, peerNetData,
-        Buffers.BufferType.TRACKED_RECEIVER, getStats());
-    return ioFilter.readAtLeast(conn.getSocket().getChannel(), bytes, peerNetData, getStats());
-  }
-
-
-
-  private DMStats getStats() {
+  protected DMStats getStats() {
     return conn.getConduit().getStats();
   }
 
   public static class Header {
 
-    private int messageLength;
-    private byte messageType;
-    private short messageId;
+    int nioMessageLength;
+    byte nioMessageType;
+    short nioMsgId;
 
-    public void setFields(int nioMessageLength, byte nioMessageType, short nioMsgId) {
-      messageLength = nioMessageLength;
-      messageType = nioMessageType;
-      messageId = nioMsgId;
-    }
+    public Header() {}
 
-    int getMessageLength() {
-      return messageLength;
+    public int getNioMessageLength() {
+      return nioMessageLength;
     }
 
-    byte getMessageType() {
-      return messageType;
+    public byte getNioMessageType() {
+      return nioMessageType;
     }
 
-    short getMessageId() {
-      return messageId;
+    public short getNioMessageId() {
+      return nioMsgId;
     }
+
+
   }
 
+  public void close() {}
+
 }
diff --git a/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgStreamer.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgStreamer.java
index 22a385b..40fd31a 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgStreamer.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgStreamer.java
@@ -38,7 +38,6 @@ import org.apache.geode.internal.InternalDataSerializer;
 import org.apache.geode.internal.ObjToByteArraySerializer;
 import org.apache.geode.internal.Version;
 import org.apache.geode.internal.logging.LogService;
-import org.apache.geode.internal.net.Buffers;
 
 /**
  * <p>
diff --git a/geode-core/src/main/java/org/apache/geode/internal/tcp/NIOMsgReader.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/NIOMsgReader.java
new file mode 100644
index 0000000..a4e35a4
--- /dev/null
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/NIOMsgReader.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.tcp;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.net.SocketException;
+import java.nio.ByteBuffer;
+import java.nio.channels.SocketChannel;
+
+import org.apache.geode.internal.Version;
+
+/**
+ * A message reader which reads from the socket using (blocking) nio.
+ *
+ */
+public class NIOMsgReader extends MsgReader {
+
+  /** the buffer used for NIO message receipt */
+  private ByteBuffer nioInputBuffer;
+  private final SocketChannel inputChannel;
+  private int lastReadPosition;
+  private int lastProcessedPosition;
+
+  public NIOMsgReader(Connection conn, Version version) throws SocketException {
+    super(conn, version);
+    this.inputChannel = conn.getSocket().getChannel();
+  }
+
+
+  @Override
+  public ByteBuffer readAtLeast(int bytes) throws IOException {
+    ensureCapacity(bytes);
+
+    while (lastReadPosition - lastProcessedPosition < bytes) {
+      nioInputBuffer.limit(nioInputBuffer.capacity());
+      nioInputBuffer.position(lastReadPosition);
+      int bytesRead = inputChannel.read(nioInputBuffer);
+      if (bytesRead < 0) {
+        throw new EOFException();
+      }
+      lastReadPosition = nioInputBuffer.position();
+    }
+    nioInputBuffer.limit(lastProcessedPosition + bytes);
+    nioInputBuffer.position(lastProcessedPosition);
+    lastProcessedPosition = nioInputBuffer.limit();
+
+    return nioInputBuffer;
+  }
+
+  /** ensures the receive buffer can hold at least bufferSize more bytes, compacting or reallocating as needed */
+  protected void ensureCapacity(int bufferSize) {
+    // Ok, so we have a buffer that's big enough
+    if (nioInputBuffer != null && nioInputBuffer.capacity() > bufferSize) {
+      if (nioInputBuffer.capacity() - lastProcessedPosition < bufferSize) {
+        nioInputBuffer.limit(lastReadPosition);
+        nioInputBuffer.position(lastProcessedPosition);
+        nioInputBuffer.compact();
+        lastReadPosition = nioInputBuffer.position();
+        lastProcessedPosition = 0;
+      }
+      return;
+    }
+
+    // otherwise, we have no buffer or a buffer that's too small
+
+    if (nioInputBuffer == null) {
+      int allocSize = conn.getReceiveBufferSize();
+      if (allocSize == -1) {
+        allocSize = conn.getConduit().tcpBufferSize;
+      }
+      if (allocSize > bufferSize) {
+        bufferSize = allocSize;
+      }
+    }
+    ByteBuffer oldBuffer = nioInputBuffer;
+    nioInputBuffer = Buffers.acquireReceiveBuffer(bufferSize, getStats());
+
+    if (oldBuffer != null) {
+      oldBuffer.limit(lastReadPosition);
+      oldBuffer.position(lastProcessedPosition);
+      nioInputBuffer.put(oldBuffer);
+      lastReadPosition = nioInputBuffer.position(); // fix for 45064
+      lastProcessedPosition = 0;
+      Buffers.releaseReceiveBuffer(oldBuffer, getStats());
+    }
+  }
+
+  @Override
+  public void close() {
+    ByteBuffer tmp = this.nioInputBuffer;
+    if (tmp != null) {
+      this.nioInputBuffer = null;
+      Buffers.releaseReceiveBuffer(tmp, getStats());
+    }
+  }
+}
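
NIOMsgReader.readAtLeast above keeps two cursors (lastProcessedPosition and lastReadPosition) so the header and body can be handed out as windows over one reusable buffer. Stripped of the pooling and compaction, the core "block until at least N bytes are buffered" loop looks like the sketch below, which works against any blocking ReadableByteChannel:

    import java.io.ByteArrayInputStream;
    import java.io.EOFException;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.Channels;
    import java.nio.channels.ReadableByteChannel;

    class ReadAtLeastSketch {
      // keep reading until the buffer holds at least 'bytes' buffered bytes
      static void readAtLeast(ReadableByteChannel channel, ByteBuffer buf, int bytes)
          throws IOException {
        while (buf.position() < bytes) { // fill mode: position == bytes buffered so far
          if (channel.read(buf) < 0) {
            throw new EOFException("peer closed the connection");
          }
        }
      }

      public static void main(String[] args) throws IOException {
        byte[] data = {0, 0, 0, 42, 1, 0, 7}; // a 7-byte header: length 42, type 1, id 7
        ReadableByteChannel channel = Channels.newChannel(new ByteArrayInputStream(data));
        ByteBuffer buf = ByteBuffer.allocate(64);
        readAtLeast(channel, buf, 7);
        buf.flip();
        System.out.println("length=" + buf.getInt() + " type=" + buf.get() + " id=" + buf.getShort());
      }
    }
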
diff --git a/geode-core/src/main/java/org/apache/geode/internal/tcp/PeerConnectionFactory.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/OioMsgReader.java
similarity index 63%
copy from geode-core/src/main/java/org/apache/geode/internal/tcp/PeerConnectionFactory.java
copy to geode-core/src/main/java/org/apache/geode/internal/tcp/OioMsgReader.java
index 1388061..3ec4298 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/tcp/PeerConnectionFactory.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/OioMsgReader.java
@@ -12,22 +12,28 @@
  * or implied. See the License for the specific language governing permissions and limitations under
  * the License.
  */
-
 package org.apache.geode.internal.tcp;
 
 import java.io.IOException;
-import java.net.Socket;
+import java.nio.ByteBuffer;
+
+import org.apache.geode.internal.Version;
 
+/**
+ * A message reader which reads from the socket using the old io.
+ *
+ */
+public class OioMsgReader extends MsgReader {
 
-public class PeerConnectionFactory {
-  /**
-   * creates a connection that we accepted (it was initiated by an explicit connect being done on
-   * the other side). We will only receive data on this socket; never send.
-   */
-  public Connection createReceiver(ConnectionTable table, Socket socket)
-      throws IOException, ConnectionException {
-    Connection connection = new Connection(table, socket);
-    connection.initReceiver();
-    return connection;
+  public OioMsgReader(Connection conn, Version version) {
+    super(conn, version);
   }
+
+  @Override
+  public ByteBuffer readAtLeast(int bytes) throws IOException {
+    byte[] buffer = new byte[bytes];
+    conn.readFully(conn.getSocket().getInputStream(), buffer, bytes);
+    return ByteBuffer.wrap(buffer);
+  }
+
 }
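
With NIOMsgReader and OioMsgReader both in place, the caller picks one based on the connection's NIO decision. Below is a hedged sketch of that selection; the Reader interface, the factory method and the lambda bodies are placeholders, since the real wiring constructs the concrete readers with a Connection:

    import java.io.IOException;
    import java.nio.ByteBuffer;

    class ReaderSelectionSketch {
      interface Reader {
        ByteBuffer readAtLeast(int bytes) throws IOException;
      }

      // stand-ins for NIOMsgReader / OioMsgReader
      static Reader chooseReader(boolean useNio) {
        if (useNio) {
          return bytes -> ByteBuffer.allocate(bytes);      // channel-based path would go here
        }
        return bytes -> ByteBuffer.wrap(new byte[bytes]);  // stream + readFully path
      }

      public static void main(String[] args) throws IOException {
        boolean useNio = !Boolean.getBoolean("p2p.oldIO"); // mirrors the old switch, true by default
        ByteBuffer buf = chooseReader(useNio).readAtLeast(16);
        System.out.println("useNio=" + useNio + " capacity=" + buf.capacity());
      }
    }
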
diff --git a/geode-core/src/main/java/org/apache/geode/internal/tcp/PeerConnectionFactory.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/PeerConnectionFactory.java
index 1388061..148c27a 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/tcp/PeerConnectionFactory.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/PeerConnectionFactory.java
@@ -18,7 +18,6 @@ package org.apache.geode.internal.tcp;
 import java.io.IOException;
 import java.net.Socket;
 
-
 public class PeerConnectionFactory {
   /**
    * creates a connection that we accepted (it was initiated by an explicit connect being done on
diff --git a/geode-core/src/main/java/org/apache/geode/internal/tcp/TCPConduit.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/TCPConduit.java
index 97d748f..c6b8bf9 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/tcp/TCPConduit.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/TCPConduit.java
@@ -15,6 +15,7 @@
 package org.apache.geode.internal.tcp;
 
 import java.io.IOException;
+import java.net.Inet6Address;
 import java.net.InetAddress;
 import java.net.InetSocketAddress;
 import java.net.ServerSocket;
@@ -26,6 +27,11 @@ import java.nio.channels.ServerSocketChannel;
 import java.nio.channels.SocketChannel;
 import java.util.Map;
 import java.util.Properties;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.RejectedExecutionException;
+import java.util.concurrent.ThreadPoolExecutor;
+
+import javax.net.ssl.SSLException;
 
 import org.apache.logging.log4j.Logger;
 
@@ -45,6 +51,7 @@ import org.apache.geode.distributed.internal.membership.InternalDistributedMembe
 import org.apache.geode.distributed.internal.membership.MembershipManager;
 import org.apache.geode.internal.alerting.AlertingAction;
 import org.apache.geode.internal.logging.LogService;
+import org.apache.geode.internal.logging.LoggingExecutors;
 import org.apache.geode.internal.logging.LoggingThread;
 import org.apache.geode.internal.logging.log4j.LogMarker;
 import org.apache.geode.internal.net.SocketCreator;
@@ -103,7 +110,19 @@ public class TCPConduit implements Runnable {
   /**
    * use javax.net.ssl.SSLServerSocketFactory?
    */
-  boolean useSSL;
+  static boolean useSSL;
+
+  /**
+   * Force use of Sockets rather than SocketChannels (NIO). Note from Bruce: due to a bug in the
+   * java VM, NIO cannot be used with IPv6 addresses on Windows. When that condition holds, the
+   * useNIO flag must be disregarded.
+   */
+  private static boolean USE_NIO;
+
+  /**
+   * use direct ByteBuffers instead of heap ByteBuffers for NIO operations
+   */
+  static boolean useDirectBuffers;
 
   /**
    * The socket producer used by the cluster
@@ -113,6 +132,11 @@ public class TCPConduit implements Runnable {
 
   private MembershipManager membershipManager;
 
+  /**
+   * true if NIO can be used for the server socket
+   */
+  private boolean useNIO;
+
   static {
     init();
   }
@@ -126,13 +150,14 @@ public class TCPConduit implements Runnable {
   }
 
   public static void init() {
+    useSSL = Boolean.getBoolean("p2p.useSSL");
+    // only use nio if not SSL
+    USE_NIO = !useSSL && !Boolean.getBoolean("p2p.oldIO");
     // only use direct buffers if we are using nio
-    LISTENER_CLOSE_TIMEOUT = Integer.getInteger("p2p.listenerCloseTimeout", 60000);
+    useDirectBuffers = USE_NIO && !Boolean.getBoolean("p2p.nodirectBuffers");
+    LISTENER_CLOSE_TIMEOUT = Integer.getInteger("p2p.listenerCloseTimeout", 60000).intValue();
     // note: bug 37730 concerned this defaulting to 50
-    BACKLOG = Integer.getInteger("p2p.backlog", 1280);
-    if (Boolean.getBoolean("p2p.oldIO")) {
-      logger.warn("detected use of p2p.oldIO setting - this is no longer supported");
-    }
+    BACKLOG = Integer.getInteger("p2p.backlog", 1280).intValue();
   }
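
The restored init() above derives TCPConduit's static flags from system properties with a fixed dependency order: NIO is considered only when SSL is off, and direct buffers only when NIO is on. The chain in isolation:

    class ConduitFlagsSketch {
      public static void main(String[] args) {
        boolean useSSL = Boolean.getBoolean("p2p.useSSL");
        // NIO is disabled either explicitly (p2p.oldIO) or implicitly by SSL
        boolean useNIO = !useSSL && !Boolean.getBoolean("p2p.oldIO");
        // direct ByteBuffers only make sense on the channel-based (NIO) path
        boolean useDirectBuffers = useNIO && !Boolean.getBoolean("p2p.nodirectBuffers");
        int backlog = Integer.getInteger("p2p.backlog", 1280);
        System.out.println("useSSL=" + useSSL + " useNIO=" + useNIO
            + " useDirectBuffers=" + useDirectBuffers + " backlog=" + backlog);
      }
    }
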
 
   ///////////////// permanent conduit state
@@ -140,8 +165,8 @@ public class TCPConduit implements Runnable {
   /**
    * the size of OS TCP/IP buffers, not set by default
    */
-  int tcpBufferSize = DistributionConfig.DEFAULT_SOCKET_BUFFER_SIZE;
-  int idleConnectionTimeout = DistributionConfig.DEFAULT_SOCKET_LEASE_TIME;
+  public int tcpBufferSize = DistributionConfig.DEFAULT_SOCKET_BUFFER_SIZE;
+  public int idleConnectionTimeout = DistributionConfig.DEFAULT_SOCKET_LEASE_TIME;
 
   /**
    * port is the tcp/ip port that this conduit binds to. If it is zero, a port from
@@ -255,14 +280,25 @@ public class TCPConduit implements Runnable {
 
     this.socketCreator =
         SocketCreatorFactory.getSocketCreatorForComponent(SecurableCommunicationChannel.CLUSTER);
-    this.useSSL = socketCreator.useSSL();
 
-    InetAddress addr = address;
-    if (addr == null) {
-      try {
-        addr = SocketCreator.getLocalHost();
-      } catch (java.net.UnknownHostException e) {
-        throw new ConnectionException("Unable to resolve localHost address", e);
+    this.useNIO = USE_NIO;
+    if (this.useNIO) {
+      InetAddress addr = address;
+      if (addr == null) {
+        try {
+          addr = SocketCreator.getLocalHost();
+        } catch (java.net.UnknownHostException e) {
+          throw new ConnectionException("Unable to resolve localHost address", e);
+        }
+      }
+      // JDK bug 6230761 - NIO can't be used with IPv6 on Windows
+      if (addr instanceof Inet6Address) {
+        String os = System.getProperty("os.name");
+        if (os != null) {
+          if (os.indexOf("Windows") != -1) {
+            this.useNIO = false;
+          }
+        }
       }
     }
 
@@ -312,20 +348,53 @@ public class TCPConduit implements Runnable {
     }
   }
 
+  private ExecutorService hsPool;
+
   /**
    * the reason for a shutdown, if abnormal
    */
   private volatile Exception shutdownCause;
 
+  private static final int HANDSHAKE_POOL_SIZE =
+      Integer.getInteger("p2p.HANDSHAKE_POOL_SIZE", 10).intValue();
+  private static final long HANDSHAKE_POOL_KEEP_ALIVE_TIME =
+      Long.getLong("p2p.HANDSHAKE_POOL_KEEP_ALIVE_TIME", 60).longValue();
+
+  /**
+   * added to fix bug 40436
+   */
+  public void setMaximumHandshakePoolSize(int maxSize) {
+    if (this.hsPool != null) {
+      ThreadPoolExecutor handshakePool = (ThreadPoolExecutor) this.hsPool;
+      if (maxSize > handshakePool.getMaximumPoolSize()) {
+        handshakePool.setMaximumPoolSize(maxSize);
+      }
+    }
+  }
+
   /**
    * binds the server socket and gets threads going
    */
   private void startAcceptor() throws ConnectionException {
     int localPort;
     int p = this.port;
+    InetAddress ba = this.address;
 
+    {
+      ExecutorService tmp_hsPool = null;
+      String threadName = "P2P-Handshaker " + ba + ":" + p + " Thread ";
+      try {
+        tmp_hsPool =
+            LoggingExecutors.newThreadPoolWithSynchronousFeedThatHandlesRejection(threadName, null,
+                null, 1, HANDSHAKE_POOL_SIZE, HANDSHAKE_POOL_KEEP_ALIVE_TIME);
+      } catch (IllegalArgumentException poolInitException) {
+        throw new ConnectionException(
+            "while creating handshake pool",
+            poolInitException);
+      }
+      this.hsPool = tmp_hsPool;
+    }
     createServerSocket();
-
     try {
       localPort = socket.getLocalPort();
 
@@ -343,6 +412,7 @@ public class TCPConduit implements Runnable {
         logger.fatal(
             "p2p.test.inhibitAcceptor was found to be set, inhibiting incoming tcp/ip connections");
         socket.close();
+        this.hsPool.shutdownNow();
       }
     } catch (IOException io) {
       String s = "While creating ServerSocket on port " + p;
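
LoggingExecutors.newThreadPoolWithSynchronousFeedThatHandlesRejection is a Geode-internal helper; a rough sketch of the same idea with plain java.util.concurrent (the bounds mirror the defaults of the constants above) might look like:

    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class HandshakePoolSketch {                 // hypothetical class name
      public static void main(String[] args) {
        ThreadPoolExecutor handshakePool = new ThreadPoolExecutor(
            1,                         // core size, as in the call above
            10,                        // max size, cf. the p2p.HANDSHAKE_POOL_SIZE default
            60, TimeUnit.SECONDS,      // keep-alive, cf. p2p.HANDSHAKE_POOL_KEEP_ALIVE_TIME
            new SynchronousQueue<>()); // direct hand-off: no queueing, reject when saturated
        handshakePool.execute(() -> System.out.println("handshake task"));
        handshakePool.shutdown();
      }
    }

Because a SynchronousQueue has no capacity, a submission is rejected with RejectedExecutionException once all of the pool's threads are busy, which is what lets acceptConnection() later in this file simply close the socket of a rejected handshake.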
@@ -361,40 +431,66 @@ public class TCPConduit implements Runnable {
     InetAddress bindAddress = this.address;
 
     try {
-      if (serverPort <= 0) {
+      if (this.useNIO) {
+        if (serverPort <= 0) {
 
-        socket = socketCreator.createServerSocketUsingPortRange(bindAddress,
-            connectionRequestBacklog, isBindAddress,
-            true, 0, tcpPortRange);
-      } else {
-        ServerSocketChannel channel = ServerSocketChannel.open();
-        socket = channel.socket();
+          socket = socketCreator.createServerSocketUsingPortRange(bindAddress,
+              connectionRequestBacklog, isBindAddress,
+              this.useNIO, 0, tcpPortRange);
+        } else {
+          ServerSocketChannel channel = ServerSocketChannel.open();
+          socket = channel.socket();
 
-        InetSocketAddress inetSocketAddress =
-            new InetSocketAddress(isBindAddress ? bindAddress : null, serverPort);
-        socket.bind(inetSocketAddress, connectionRequestBacklog);
-      }
+          InetSocketAddress inetSocketAddress =
+              new InetSocketAddress(isBindAddress ? bindAddress : null, serverPort);
+          socket.bind(inetSocketAddress, connectionRequestBacklog);
+        }
+
+        if (useNIO) {
+          try {
+            // set these buffers early so that large buffers will be allocated
+            // on accepted sockets (see java.net.ServerSocket.setReceiveBufferSize javadocs)
+            socket.setReceiveBufferSize(tcpBufferSize);
+            int newSize = socket.getReceiveBufferSize();
+            if (newSize != tcpBufferSize) {
+              logger.info("{} is {} instead of the requested {}",
+                  "Listener receiverBufferSize", Integer.valueOf(newSize),
+                  Integer.valueOf(tcpBufferSize));
+            }
+          } catch (SocketException ex) {
+            logger.warn("Failed to set listener receiverBufferSize to {}",
+                tcpBufferSize);
+          }
+        }
+        channel = socket.getChannel();
+      } else {
+        try {
+          if (serverPort <= 0) {
+            socket = socketCreator.createServerSocketUsingPortRange(bindAddress,
+                connectionRequestBacklog, isBindAddress,
+                this.useNIO, this.tcpBufferSize, tcpPortRange);
+          } else {
+            socket = socketCreator.createServerSocket(serverPort, connectionRequestBacklog,
+                isBindAddress ? bindAddress : null,
+                this.tcpBufferSize);
+          }
+          int newSize = socket.getReceiveBufferSize();
+          if (newSize != this.tcpBufferSize) {
+            logger.info("Listener receiverBufferSize is {} instead of the requested {}",
+                Integer.valueOf(newSize),
+                Integer.valueOf(this.tcpBufferSize));
+          }
+        } catch (SocketException ex) {
+          logger.warn("Failed to set listener receiverBufferSize to {}",
+              this.tcpBufferSize);
 
-      try {
-        // set these buffers early so that large buffers will be allocated
-        // on accepted sockets (see java.net.ServerSocket.setReceiveBufferSize javadocs)
-        socket.setReceiveBufferSize(tcpBufferSize);
-        int newSize = socket.getReceiveBufferSize();
-        if (newSize != tcpBufferSize) {
-          logger.info("{} is {} instead of the requested {}",
-              "Listener receiverBufferSize", newSize,
-              tcpBufferSize);
         }
-      } catch (SocketException ex) {
-        logger.warn("Failed to set listener receiverBufferSize to {}",
-            tcpBufferSize);
       }
-      channel = socket.getChannel();
       port = socket.getLocalPort();
     } catch (IOException io) {
       throw new ConnectionException(
           String.format("While creating ServerSocket on port %s with address %s",
-              serverPort, bindAddress),
+              new Object[] {Integer.valueOf(serverPort), bindAddress}),
           io);
     }
   }
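
Both branches above try to set the receive buffer before the socket is bound, since the listening socket's buffer size is what accepted sockets inherit. A stand-alone illustration with the plain JDK (port 0 and the 64 KB size are arbitrary):

    import java.net.InetSocketAddress;
    import java.nio.channels.ServerSocketChannel;

    public class PreBindBufferSketch {                 // hypothetical class name
      public static void main(String[] args) throws Exception {
        ServerSocketChannel channel = ServerSocketChannel.open();
        // request the buffer size before bind() so accepted sockets pick it up
        channel.socket().setReceiveBufferSize(64 * 1024);
        channel.socket().bind(new InetSocketAddress(0), 1280);
        System.out.println("requested 65536, OS granted "
            + channel.socket().getReceiveBufferSize());
        channel.close();
      }
    }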
@@ -435,6 +531,7 @@ public class TCPConduit implements Runnable {
       // ignore, please!
     }
 
+    // this.hsPool.shutdownNow(); // I don't trust this not to allocate objects or to synchronize
     // this.conTable.close(); not safe against deadlocks
     ConnectionTable.emergencyClose();
 
@@ -456,7 +553,7 @@ public class TCPConduit implements Runnable {
         // set timeout endpoint here since interrupt() has been known
         // to hang
         long timeout = System.currentTimeMillis() + LISTENER_CLOSE_TIMEOUT;
-        Thread t = this.thread;
+        Thread t = this.thread;
         if (channel != null) {
           channel.close();
           // NOTE: do not try to interrupt the listener thread at this point.
@@ -482,10 +579,12 @@ public class TCPConduit implements Runnable {
         if (t != null && t.isAlive()) {
           logger.warn(
               "Unable to shut down listener within {}ms.  Unable to interrupt socket.accept() due to JDK bug. Giving up.",
-              LISTENER_CLOSE_TIMEOUT);
+              Integer.valueOf(LISTENER_CLOSE_TIMEOUT));
         }
       } catch (IOException | InterruptedException e) {
         // we're already trying to shutdown, ignore
+      } finally {
+        this.hsPool.shutdownNow();
       }
 
       // close connections after shutting down acceptor to fix bug 30695
@@ -557,8 +656,21 @@ public class TCPConduit implements Runnable {
 
       Socket othersock = null;
       try {
-        SocketChannel otherChannel = channel.accept();
-        othersock = otherChannel.socket();
+        if (this.useNIO) {
+          SocketChannel otherChannel = channel.accept();
+          othersock = otherChannel.socket();
+        } else {
+          try {
+            othersock = socket.accept();
+          } catch (SSLException ex) {
+            // SW: this happens when there is a problem in the P2P SSL
+            // configuration; we need to exit here, otherwise the accept
+            // loop spins forever, just filling the logs
+            logger.warn("Stopping P2P listener due to SSL configuration problem.",
+                ex);
+            break;
+          }
+        }
         if (stopped) {
           try {
             if (othersock != null) {
@@ -636,6 +748,22 @@ public class TCPConduit implements Runnable {
     }
   }
 
+  private void acceptConnection(final Socket othersock) {
+    try {
+      this.hsPool.execute(new Runnable() {
+        @Override
+        public void run() {
+          basicAcceptConnection(othersock);
+        }
+      });
+    } catch (RejectedExecutionException rejected) {
+      try {
+        othersock.close();
+      } catch (IOException ignore) {
+      }
+    }
+  }
+
   private ConnectionTable getConTable() {
     ConnectionTable result = this.conTable;
     if (result == null) {
@@ -646,10 +774,17 @@ public class TCPConduit implements Runnable {
     return result;
   }
 
-  private void acceptConnection(Socket othersock) {
+  protected void basicAcceptConnection(Socket othersock) {
     try {
+      othersock.setSoTimeout(0);
+      socketCreator.handshakeIfSocketIsSSL(othersock, idleConnectionTimeout);
       getConTable().acceptConnection(othersock, new PeerConnectionFactory());
-    } catch (IOException | ConnectionException io) {
+    } catch (IOException io) {
+      // exception is logged by the Connection
+      if (!stopped) {
+        this.getStats().incFailedAccept();
+      }
+    } catch (ConnectionException ex) {
       // exception is logged by the Connection
       if (!stopped) {
         this.getStats().incFailedAccept();
@@ -665,6 +800,13 @@ public class TCPConduit implements Runnable {
   }
 
   /**
+   * return true if "new IO" classes are being used for the server socket
+   */
+  protected boolean useNIO() {
+    return this.useNIO;
+  }
+
+  /**
    * records the current outgoing message count on all thread-owned ordered connections
    *
    * @since GemFire 5.1
@@ -971,10 +1113,6 @@ public class TCPConduit implements Runnable {
     return stats;
   }
 
-  public boolean useSSL() {
-    return useSSL;
-  }
-
   protected class Stopper extends CancelCriterion {
 
     @Override
diff --git a/geode-core/src/main/java/org/apache/geode/internal/util/DscodeHelper.java b/geode-core/src/main/java/org/apache/geode/internal/util/DscodeHelper.java
index 4b48683..3775d61 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/util/DscodeHelper.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/util/DscodeHelper.java
@@ -32,11 +32,7 @@ public class DscodeHelper {
 
   public static DSCODE toDSCODE(final byte value) throws IOException {
     try {
-      DSCODE result = dscodes[value];
-      if (result == null) {
-        throw new IOException("Unknown header byte " + value);
-      }
-      return result;
+      return dscodes[value];
     } catch (ArrayIndexOutOfBoundsException e) {
       throw new IOException("Unknown header byte: " + value);
     }
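
After this revert, toDSCODE relies on the array access alone to reject bad header bytes: a negative or too-large value raises ArrayIndexOutOfBoundsException, which is translated into an IOException. A self-contained sketch of the pattern with a made-up enum:

    import java.io.IOException;

    public class CodeLookupSketch {                    // hypothetical enum and helper
      enum Code { ZERO, ONE, TWO }

      private static final Code[] CODES = Code.values();

      static Code toCode(byte value) throws IOException {
        try {
          return CODES[value];                         // out-of-range bytes throw here
        } catch (ArrayIndexOutOfBoundsException e) {
          throw new IOException("Unknown header byte: " + value);
        }
      }

      public static void main(String[] args) throws IOException {
        System.out.println(toCode((byte) 1));          // prints ONE
      }
    }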
diff --git a/geode-core/src/main/java/org/apache/geode/management/internal/FederatingManager.java b/geode-core/src/main/java/org/apache/geode/management/internal/FederatingManager.java
index 0fdc80d..d4b3135 100755
--- a/geode-core/src/main/java/org/apache/geode/management/internal/FederatingManager.java
+++ b/geode-core/src/main/java/org/apache/geode/management/internal/FederatingManager.java
@@ -108,7 +108,7 @@ public class FederatingManager extends Manager {
         logger.debug("Starting the Federating Manager.... ");
       }
 
-      pooledMembershipExecutor = LoggingExecutors.newFixedThreadPool("FederatingManager", true,
+      pooledMembershipExecutor = LoggingExecutors.newFixedThreadPool("FederatingManager", false,
           Runtime.getRuntime().availableProcessors());
 
       running = true;
diff --git a/geode-core/src/main/resources/org/apache/geode/internal/sanctioned-geode-core-serializables.txt b/geode-core/src/main/resources/org/apache/geode/internal/sanctioned-geode-core-serializables.txt
index a8a2682..586114e 100644
--- a/geode-core/src/main/resources/org/apache/geode/internal/sanctioned-geode-core-serializables.txt
+++ b/geode-core/src/main/resources/org/apache/geode/internal/sanctioned-geode-core-serializables.txt
@@ -441,7 +441,6 @@ org/apache/geode/internal/memcached/ResponseStatus$5,false
 org/apache/geode/internal/memcached/ResponseStatus$6,false
 org/apache/geode/internal/memcached/commands/ClientError,true,-2426928000696680541
 org/apache/geode/internal/monitoring/ThreadsMonitoring$Mode,false
-org/apache/geode/internal/net/Buffers$BufferType,false
 org/apache/geode/internal/offheap/MemoryBlock$State,false
 org/apache/geode/internal/offheap/OffHeapStorage$1,false
 org/apache/geode/internal/offheap/OffHeapStorage$2,false
diff --git a/geode-core/src/test/java/org/apache/geode/internal/net/BuffersTest.java b/geode-core/src/test/java/org/apache/geode/internal/net/BuffersTest.java
deleted file mode 100644
index 96a4ac6..0000000
--- a/geode-core/src/test/java/org/apache/geode/internal/net/BuffersTest.java
+++ /dev/null
@@ -1,108 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
- * agreements. See the NOTICE file distributed with this work for additional information regarding
- * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License. You may obtain a
- * copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License
- * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
- * or implied. See the License for the specific language governing permissions and limitations under
- * the License.
- */
-
-package org.apache.geode.internal.net;
-
-import static org.assertj.core.api.Assertions.assertThat;
-import static org.junit.Assert.assertEquals;
-import static org.mockito.Mockito.mock;
-
-import java.nio.ByteBuffer;
-
-import org.junit.Test;
-
-import org.apache.geode.distributed.internal.DMStats;
-
-public class BuffersTest {
-
-  @Test
-  public void expandBuffer() throws Exception {
-    ByteBuffer buffer = ByteBuffer.allocate(256);
-    buffer.clear();
-    for (int i = 0; i < 256; i++) {
-      byte b = (byte) (i & 0xff);
-      buffer.put(b);
-    }
-    createAndVerifyNewWriteBuffer(buffer, false);
-
-    createAndVerifyNewWriteBuffer(buffer, true);
-
-
-    createAndVerifyNewReadBuffer(buffer, false);
-
-    createAndVerifyNewReadBuffer(buffer, true);
-
-
-  }
-
-  private void createAndVerifyNewWriteBuffer(ByteBuffer buffer, boolean useDirectBuffer) {
-    buffer.position(buffer.capacity());
-    ByteBuffer newBuffer =
-        Buffers.expandWriteBufferIfNeeded(Buffers.BufferType.UNTRACKED, buffer, 500,
-            mock(DMStats.class));
-    assertEquals(buffer.position(), newBuffer.position());
-    assertEquals(500, newBuffer.capacity());
-    newBuffer.flip();
-    for (int i = 0; i < 256; i++) {
-      byte expected = (byte) (i & 0xff);
-      byte actual = (byte) (newBuffer.get() & 0xff);
-      assertEquals(expected, actual);
-    }
-  }
-
-  private void createAndVerifyNewReadBuffer(ByteBuffer buffer, boolean useDirectBuffer) {
-    buffer.position(0);
-    buffer.limit(256);
-    ByteBuffer newBuffer =
-        Buffers.expandReadBufferIfNeeded(Buffers.BufferType.UNTRACKED, buffer, 500,
-            mock(DMStats.class));
-    assertEquals(0, newBuffer.position());
-    assertEquals(500, newBuffer.capacity());
-    for (int i = 0; i < 256; i++) {
-      byte expected = (byte) (i & 0xff);
-      byte actual = (byte) (newBuffer.get() & 0xff);
-      assertEquals(expected, actual);
-    }
-  }
-
-
-  // the fixed numbers in this test came from a distributed unit test failure
-  @Test
-  public void bufferPositionAndLimitForReadAreCorrectAfterExpansion() throws Exception {
-    ByteBuffer buffer = ByteBuffer.allocate(33842);
-    buffer.position(7);
-    buffer.limit(16384);
-    ByteBuffer newBuffer = Buffers.expandReadBufferIfNeeded(Buffers.BufferType.UNTRACKED, buffer,
-        40899, mock(DMStats.class));
-    assertThat(newBuffer.capacity()).isGreaterThanOrEqualTo(40899);
-    // buffer should be ready to read the same amount of data
-    assertThat(newBuffer.position()).isEqualTo(0);
-    assertThat(newBuffer.limit()).isEqualTo(16384 - 7);
-  }
-
-
-  @Test
-  public void bufferPositionAndLimitForWriteAreCorrectAfterExpansion() throws Exception {
-    ByteBuffer buffer = ByteBuffer.allocate(33842);
-    buffer.position(16384);
-    buffer.limit(buffer.capacity());
-    ByteBuffer newBuffer = Buffers.expandWriteBufferIfNeeded(Buffers.BufferType.UNTRACKED, buffer,
-        40899, mock(DMStats.class));
-    assertThat(newBuffer.capacity()).isGreaterThanOrEqualTo(40899);
-    // buffer should have the same amount of data as the old one
-    assertThat(newBuffer.position()).isEqualTo(16384);
-    assertThat(newBuffer.limit()).isEqualTo(newBuffer.capacity());
-  }
-}
diff --git a/geode-core/src/test/java/org/apache/geode/internal/net/NioPlainEngineTest.java b/geode-core/src/test/java/org/apache/geode/internal/net/NioPlainEngineTest.java
deleted file mode 100644
index 3406717..0000000
--- a/geode-core/src/test/java/org/apache/geode/internal/net/NioPlainEngineTest.java
+++ /dev/null
@@ -1,156 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
- * agreements. See the NOTICE file distributed with this work for additional information regarding
- * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License. You may obtain a
- * copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License
- * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
- * or implied. See the License for the specific language governing permissions and limitations under
- * the License.
- */
-package org.apache.geode.internal.net;
-
-import static org.assertj.core.api.Assertions.assertThat;
-import static org.mockito.ArgumentMatchers.any;
-import static org.mockito.ArgumentMatchers.isA;
-import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.times;
-import static org.mockito.Mockito.verify;
-import static org.mockito.Mockito.when;
-
-import java.io.EOFException;
-import java.nio.ByteBuffer;
-import java.nio.channels.SocketChannel;
-
-import org.junit.Before;
-import org.junit.Test;
-import org.mockito.invocation.InvocationOnMock;
-import org.mockito.stubbing.Answer;
-
-import org.apache.geode.distributed.internal.DMStats;
-
-public class NioPlainEngineTest {
-
-  private static final int netBufferSize = 10000;
-  private static final int appBufferSize = 20000;
-
-  private DMStats mockStats;
-  private NioPlainEngine nioEngine;
-
-  @Before
-  public void setUp() throws Exception {
-    mockStats = mock(DMStats.class);
-
-    nioEngine = new NioPlainEngine();
-  }
-
-  @Test
-  public void unwrap() {
-    ByteBuffer buffer = ByteBuffer.allocate(100);
-    buffer.position(0).limit(buffer.capacity());
-    nioEngine.unwrap(buffer);
-    assertThat(buffer.position()).isEqualTo(buffer.limit());
-  }
-
-  @Test
-  public void ensureWrappedCapacity() {
-    ByteBuffer wrappedBuffer = Buffers.acquireReceiveBuffer(100, mockStats);
-    verify(mockStats, times(1)).incReceiverBufferSize(any(Integer.class), any(Boolean.class));
-    wrappedBuffer.put(new byte[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9});
-    nioEngine.lastReadPosition = 10;
-    int requestedCapacity = 210;
-    ByteBuffer result = nioEngine.ensureWrappedCapacity(requestedCapacity, wrappedBuffer,
-        Buffers.BufferType.TRACKED_RECEIVER, mockStats);
-    verify(mockStats, times(2)).incReceiverBufferSize(any(Integer.class), any(Boolean.class));
-    assertThat(result.capacity()).isGreaterThanOrEqualTo(requestedCapacity);
-    assertThat(result).isNotSameAs(wrappedBuffer);
-    // make sure that data was transferred to the new buffer
-    for (int i = 0; i < 10; i++) {
-      assertThat(result.get(i)).isEqualTo(wrappedBuffer.get(i));
-    }
-  }
-
-  @Test
-  public void ensureWrappedCapacityWithEnoughExistingCapacityAndConsumedDataPresent() {
-    int requestedCapacity = 210;
-    final int consumedDataPresentInBuffer = 100;
-    final int unconsumedDataPresentInBuffer = 10;
-    // the buffer will have enough capacity but will need to be compacted
-    ByteBuffer wrappedBuffer =
-        ByteBuffer.allocate(requestedCapacity + unconsumedDataPresentInBuffer);
-    wrappedBuffer.put(new byte[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9});
-    nioEngine.lastProcessedPosition = consumedDataPresentInBuffer;
-    // previous read left 10 bytes
-    nioEngine.lastReadPosition = consumedDataPresentInBuffer + unconsumedDataPresentInBuffer;
-    ByteBuffer result =
-        wrappedBuffer = nioEngine.ensureWrappedCapacity(requestedCapacity, wrappedBuffer,
-            Buffers.BufferType.UNTRACKED, mockStats);
-    assertThat(result.capacity()).isEqualTo(requestedCapacity + unconsumedDataPresentInBuffer);
-    assertThat(result).isSameAs(wrappedBuffer);
-    // make sure that data was transferred to the new buffer
-    for (int i = 0; i < 10; i++) {
-      assertThat(result.get(i)).isEqualTo(wrappedBuffer.get(i));
-    }
-    assertThat(nioEngine.lastProcessedPosition).isEqualTo(0);
-    assertThat(nioEngine.lastReadPosition).isEqualTo(10);
-  }
-
-  @Test
-  public void readAtLeast() throws Exception {
-    final int amountToRead = 150;
-    final int individualRead = 60;
-    final int preexistingBytes = 10;
-    ByteBuffer wrappedBuffer = ByteBuffer.allocate(1000);
-    SocketChannel mockChannel = mock(SocketChannel.class);
-
-    // simulate some socket reads
-    when(mockChannel.read(any(ByteBuffer.class))).thenAnswer(new Answer<Integer>() {
-      @Override
-      public Integer answer(InvocationOnMock invocation) throws Throwable {
-        ByteBuffer buffer = invocation.getArgument(0);
-        buffer.position(buffer.position() + individualRead);
-        return individualRead;
-      }
-    });
-
-    nioEngine.lastReadPosition = 10;
-
-    ByteBuffer data = nioEngine.readAtLeast(mockChannel, amountToRead, wrappedBuffer, mockStats);
-    verify(mockChannel, times(3)).read(isA(ByteBuffer.class));
-    assertThat(data.position()).isEqualTo(0);
-    assertThat(data.limit()).isEqualTo(amountToRead);
-    assertThat(nioEngine.lastReadPosition).isEqualTo(individualRead * 3 + preexistingBytes);
-    assertThat(nioEngine.lastProcessedPosition).isEqualTo(amountToRead);
-
-    data = nioEngine.readAtLeast(mockChannel, amountToRead, wrappedBuffer, mockStats);
-    verify(mockChannel, times(5)).read(any(ByteBuffer.class));
-    // at end of last readAtLeast data
-    assertThat(data.position()).isEqualTo(amountToRead);
-    // we read amountToRead bytes
-    assertThat(data.limit()).isEqualTo(amountToRead * 2);
-    // we did 2 more reads from the network
-    assertThat(nioEngine.lastReadPosition).isEqualTo(individualRead * 5 + preexistingBytes);
-    // the next read will start at the end of consumed data
-    assertThat(nioEngine.lastProcessedPosition).isEqualTo(amountToRead * 2);
-
-  }
-
-  @Test(expected = EOFException.class)
-  public void readAtLeastThrowsEOFException() throws Exception {
-    final int amountToRead = 150;
-    ByteBuffer wrappedBuffer = ByteBuffer.allocate(1000);
-    SocketChannel mockChannel = mock(SocketChannel.class);
-
-    // simulate some socket reads
-    when(mockChannel.read(any(ByteBuffer.class))).thenReturn(-1);
-
-    nioEngine.lastReadPosition = 10;
-
-    nioEngine.readAtLeast(mockChannel, amountToRead, wrappedBuffer, mockStats);
-  }
-
-}
diff --git a/geode-core/src/test/java/org/apache/geode/internal/net/NioSslEngineTest.java b/geode-core/src/test/java/org/apache/geode/internal/net/NioSslEngineTest.java
deleted file mode 100644
index b12df09..0000000
--- a/geode-core/src/test/java/org/apache/geode/internal/net/NioSslEngineTest.java
+++ /dev/null
@@ -1,605 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
- * agreements. See the NOTICE file distributed with this work for additional information regarding
- * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License. You may obtain a
- * copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License
- * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
- * or implied. See the License for the specific language governing permissions and limitations under
- * the License.
- */
-package org.apache.geode.internal.net;
-
-import static javax.net.ssl.SSLEngineResult.HandshakeStatus.FINISHED;
-import static javax.net.ssl.SSLEngineResult.HandshakeStatus.NEED_TASK;
-import static javax.net.ssl.SSLEngineResult.HandshakeStatus.NEED_UNWRAP;
-import static javax.net.ssl.SSLEngineResult.HandshakeStatus.NEED_WRAP;
-import static javax.net.ssl.SSLEngineResult.HandshakeStatus.NOT_HANDSHAKING;
-import static javax.net.ssl.SSLEngineResult.Status.BUFFER_OVERFLOW;
-import static javax.net.ssl.SSLEngineResult.Status.BUFFER_UNDERFLOW;
-import static javax.net.ssl.SSLEngineResult.Status.CLOSED;
-import static javax.net.ssl.SSLEngineResult.Status.OK;
-import static org.assertj.core.api.Assertions.assertThat;
-import static org.assertj.core.api.Assertions.assertThatThrownBy;
-import static org.mockito.ArgumentMatchers.any;
-import static org.mockito.ArgumentMatchers.isA;
-import static org.mockito.Mockito.atLeast;
-import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.spy;
-import static org.mockito.Mockito.times;
-import static org.mockito.Mockito.verify;
-import static org.mockito.Mockito.when;
-
-import java.net.Socket;
-import java.net.SocketException;
-import java.nio.ByteBuffer;
-import java.nio.channels.ClosedChannelException;
-import java.nio.channels.SocketChannel;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.List;
-
-import javax.net.ssl.SSLEngine;
-import javax.net.ssl.SSLEngineResult;
-import javax.net.ssl.SSLException;
-import javax.net.ssl.SSLHandshakeException;
-import javax.net.ssl.SSLSession;
-
-import org.junit.Before;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-import org.mockito.invocation.InvocationOnMock;
-import org.mockito.stubbing.Answer;
-
-import org.apache.geode.GemFireIOException;
-import org.apache.geode.distributed.internal.DMStats;
-import org.apache.geode.test.junit.categories.MembershipTest;
-
-@Category({MembershipTest.class})
-public class NioSslEngineTest {
-  private static final int netBufferSize = 10000;
-  private static final int appBufferSize = 20000;
-
-  private SSLEngine mockEngine;
-  private DMStats mockStats;
-  private NioSslEngine nioSslEngine;
-  private NioSslEngine spyNioSslEngine;
-
-  @Before
-  public void setUp() throws Exception {
-    mockEngine = mock(SSLEngine.class);
-
-    SSLSession mockSession = mock(SSLSession.class);
-    when(mockEngine.getSession()).thenReturn(mockSession);
-    when(mockSession.getPacketBufferSize()).thenReturn(netBufferSize);
-    when(mockSession.getApplicationBufferSize()).thenReturn(appBufferSize);
-
-    mockStats = mock(DMStats.class);
-
-    nioSslEngine = new NioSslEngine(mockEngine, mockStats);
-    spyNioSslEngine = spy(nioSslEngine);
-  }
-
-  @Test
-  public void handshake() throws Exception {
-    SocketChannel mockChannel = mock(SocketChannel.class);
-    when(mockChannel.read(any(ByteBuffer.class))).thenReturn(100, 100, 100, 0);
-    Socket mockSocket = mock(Socket.class);
-    when(mockChannel.socket()).thenReturn(mockSocket);
-    when(mockSocket.isClosed()).thenReturn(false);
-
-    // initial read of handshake status followed by read of handshake status after task execution
-    when(mockEngine.getHandshakeStatus()).thenReturn(NEED_UNWRAP, NEED_WRAP);
-
-    // interleaved wraps/unwraps/task-execution
-    when(mockEngine.unwrap(any(ByteBuffer.class), any(ByteBuffer.class))).thenReturn(
-        new SSLEngineResult(OK, NEED_WRAP, 100, 100),
-        new SSLEngineResult(BUFFER_OVERFLOW, NEED_UNWRAP, 0, 0),
-        new SSLEngineResult(OK, NEED_TASK, 100, 0));
-
-    when(mockEngine.getDelegatedTask()).thenReturn(() -> {
-    }, (Runnable) null);
-
-    when(mockEngine.wrap(any(ByteBuffer.class), any(ByteBuffer.class))).thenReturn(
-        new SSLEngineResult(OK, NEED_UNWRAP, 100, 100),
-        new SSLEngineResult(BUFFER_OVERFLOW, NEED_WRAP, 0, 0),
-        new SSLEngineResult(CLOSED, FINISHED, 100, 0));
-
-    spyNioSslEngine.handshake(mockChannel, 10000, ByteBuffer.allocate(netBufferSize / 2));
-    verify(mockEngine, atLeast(2)).getHandshakeStatus();
-    verify(mockEngine, times(3)).wrap(any(ByteBuffer.class), any(ByteBuffer.class));
-    verify(mockEngine, times(3)).unwrap(any(ByteBuffer.class), any(ByteBuffer.class));
-    verify(spyNioSslEngine, times(2)).expandWriteBuffer(any(Buffers.BufferType.class),
-        any(ByteBuffer.class), any(Integer.class), any(DMStats.class));
-    verify(spyNioSslEngine, times(1)).handleBlockingTasks();
-    verify(mockChannel, times(3)).read(any(ByteBuffer.class));
-  }
-
-  @Test
-  public void handshakeUsesBufferParameter() throws Exception {
-    SocketChannel mockChannel = mock(SocketChannel.class);
-    when(mockChannel.read(any(ByteBuffer.class))).thenReturn(100, 100, 100, 0);
-    Socket mockSocket = mock(Socket.class);
-    when(mockChannel.socket()).thenReturn(mockSocket);
-    when(mockSocket.isClosed()).thenReturn(false);
-
-    // initial read of handshake status followed by read of handshake status after task execution
-    when(mockEngine.getHandshakeStatus()).thenReturn(NEED_UNWRAP, NEED_WRAP);
-
-    // interleaved wraps/unwraps/task-execution
-    when(mockEngine.unwrap(any(ByteBuffer.class), any(ByteBuffer.class))).thenReturn(
-        new SSLEngineResult(OK, NEED_WRAP, 100, 100),
-        new SSLEngineResult(BUFFER_OVERFLOW, NEED_UNWRAP, 0, 0),
-        new SSLEngineResult(OK, NEED_TASK, 100, 0));
-
-    when(mockEngine.getDelegatedTask()).thenReturn(() -> {
-    }, (Runnable) null);
-
-    when(mockEngine.wrap(any(ByteBuffer.class), any(ByteBuffer.class))).thenReturn(
-        new SSLEngineResult(OK, NEED_UNWRAP, 100, 100),
-        new SSLEngineResult(BUFFER_OVERFLOW, NEED_WRAP, 0, 0),
-        new SSLEngineResult(CLOSED, FINISHED, 100, 0));
-
-    ByteBuffer byteBuffer = ByteBuffer.allocate(netBufferSize);
-
-    spyNioSslEngine.handshake(mockChannel, 10000, byteBuffer);
-
-    assertThat(spyNioSslEngine.handshakeBuffer).isSameAs(byteBuffer);
-  }
-
-
-  @Test
-  public void handshakeDetectsClosedSocket() throws Exception {
-    SocketChannel mockChannel = mock(SocketChannel.class);
-    when(mockChannel.read(any(ByteBuffer.class))).thenReturn(100, 100, 100, 0);
-    Socket mockSocket = mock(Socket.class);
-    when(mockChannel.socket()).thenReturn(mockSocket);
-    when(mockSocket.isClosed()).thenReturn(true);
-
-    // initial read of handshake status followed by read of handshake status after task execution
-    when(mockEngine.getHandshakeStatus()).thenReturn(NEED_UNWRAP);
-
-    ByteBuffer byteBuffer = ByteBuffer.allocate(netBufferSize);
-
-    assertThatThrownBy(() -> spyNioSslEngine.handshake(mockChannel, 10000, byteBuffer))
-        .isInstanceOf(
-            SocketException.class)
-        .hasMessageContaining("handshake terminated");
-  }
-
-  @Test
-  public void handshakeDoesNotTerminateWithFinished() throws Exception {
-    SocketChannel mockChannel = mock(SocketChannel.class);
-    when(mockChannel.read(any(ByteBuffer.class))).thenReturn(100, 100, 100, 0);
-    Socket mockSocket = mock(Socket.class);
-    when(mockChannel.socket()).thenReturn(mockSocket);
-    when(mockSocket.isClosed()).thenReturn(false);
-
-    // initial read of handshake status followed by read of handshake status after task execution
-    when(mockEngine.getHandshakeStatus()).thenReturn(NEED_UNWRAP);
-
-    // interleaved wraps/unwraps/task-execution
-    when(mockEngine.unwrap(any(ByteBuffer.class), any(ByteBuffer.class))).thenReturn(
-        new SSLEngineResult(OK, NEED_WRAP, 100, 100));
-
-    when(mockEngine.wrap(any(ByteBuffer.class), any(ByteBuffer.class))).thenReturn(
-        new SSLEngineResult(CLOSED, NOT_HANDSHAKING, 100, 0));
-
-    ByteBuffer byteBuffer = ByteBuffer.allocate(netBufferSize);
-
-    assertThatThrownBy(() -> spyNioSslEngine.handshake(mockChannel, 10000, byteBuffer))
-        .isInstanceOf(
-            SSLHandshakeException.class)
-        .hasMessageContaining("SSL Handshake terminated with status");
-  }
-
-
-  @Test
-  public void checkClosed() {
-    nioSslEngine.checkClosed();
-  }
-
-  @Test(expected = IllegalStateException.class)
-  public void checkClosedThrows() throws Exception {
-    when(mockEngine.wrap(any(ByteBuffer.class), any(ByteBuffer.class))).thenReturn(
-        new SSLEngineResult(CLOSED, FINISHED, 0, 100));
-    nioSslEngine.close(mock(SocketChannel.class));
-    nioSslEngine.checkClosed();
-  }
-
-  @Test
-  public void wrap() throws Exception {
-    // make the application data too big to fit into the engine's encryption buffer
-    ByteBuffer appData = ByteBuffer.allocate(nioSslEngine.myNetData.capacity() + 100);
-    byte[] appBytes = new byte[appData.capacity()];
-    Arrays.fill(appBytes, (byte) 0x1F);
-    appData.put(appBytes);
-    appData.flip();
-
-    // create an engine that will transfer bytes from the application buffer to the encrypted buffer
-    TestSSLEngine testEngine = new TestSSLEngine();
-    testEngine.addReturnResult(
-        new SSLEngineResult(OK, NEED_TASK, appData.remaining(), appData.remaining()));
-    spyNioSslEngine.engine = testEngine;
-
-    ByteBuffer wrappedBuffer = spyNioSslEngine.wrap(appData);
-
-    verify(spyNioSslEngine, times(1)).expandWriteBuffer(any(Buffers.BufferType.class),
-        any(ByteBuffer.class), any(Integer.class), any(DMStats.class));
-    appData.flip();
-    assertThat(wrappedBuffer).isEqualTo(appData);
-    verify(spyNioSslEngine, times(1)).handleBlockingTasks();
-  }
-
-  @Test
-  public void wrapFails() {
-    // make the application data too big to fit into the engine's encryption buffer
-    ByteBuffer appData = ByteBuffer.allocate(nioSslEngine.myNetData.capacity() + 100);
-    byte[] appBytes = new byte[appData.capacity()];
-    Arrays.fill(appBytes, (byte) 0x1F);
-    appData.put(appBytes);
-    appData.flip();
-
-    // create an engine that will transfer bytes from the application buffer to the encrypted buffer
-    TestSSLEngine testEngine = new TestSSLEngine();
-    testEngine.addReturnResult(
-        new SSLEngineResult(CLOSED, NEED_TASK, appData.remaining(), appData.remaining()));
-    spyNioSslEngine.engine = testEngine;
-
-    assertThatThrownBy(() -> spyNioSslEngine.wrap(appData)).isInstanceOf(SSLException.class)
-        .hasMessageContaining("Error encrypting data");
-  }
-
-  @Test
-  public void unwrapWithBufferOverflow() throws Exception {
-    // make the application data too big to fit into the engine's encryption buffer
-    ByteBuffer wrappedData = ByteBuffer.allocate(nioSslEngine.peerAppData.capacity() + 100);
-    byte[] netBytes = new byte[wrappedData.capacity()];
-    Arrays.fill(netBytes, (byte) 0x1F);
-    wrappedData.put(netBytes);
-    wrappedData.flip();
-
-    // create an engine that will transfer bytes from the application buffer to the encrypted buffer
-    TestSSLEngine testEngine = new TestSSLEngine();
-    spyNioSslEngine.engine = testEngine;
-
-    testEngine.addReturnResult(
-        new SSLEngineResult(BUFFER_OVERFLOW, NEED_UNWRAP, netBytes.length, netBytes.length),
-        new SSLEngineResult(OK, FINISHED, netBytes.length, netBytes.length));
-
-    ByteBuffer unwrappedBuffer = spyNioSslEngine.unwrap(wrappedData);
-    unwrappedBuffer.flip();
-
-    verify(spyNioSslEngine, times(2)).expandPeerAppData(any(ByteBuffer.class));
-    assertThat(unwrappedBuffer).isEqualTo(ByteBuffer.wrap(netBytes));
-  }
-
-  @Test
-  public void unwrapWithBufferUnderflow() throws Exception {
-    ByteBuffer wrappedData = ByteBuffer.allocate(nioSslEngine.peerAppData.capacity());
-    byte[] netBytes = new byte[wrappedData.capacity() / 2];
-    Arrays.fill(netBytes, (byte) 0x1F);
-    wrappedData.put(netBytes);
-    wrappedData.flip();
-
-    // create an engine that will transfer bytes from the application buffer to the encrypted buffer
-    TestSSLEngine testEngine = new TestSSLEngine();
-    testEngine.addReturnResult(new SSLEngineResult(BUFFER_UNDERFLOW, NEED_TASK, 0, 0));
-    spyNioSslEngine.engine = testEngine;
-
-    ByteBuffer unwrappedBuffer = spyNioSslEngine.unwrap(wrappedData);
-    unwrappedBuffer.flip();
-    assertThat(unwrappedBuffer.remaining()).isEqualTo(0);
-    assertThat(wrappedData.position()).isEqualTo(netBytes.length);
-  }
-
-  @Test
-  public void unwrapWithDecryptionError() {
-    // make the application data too big to fit into the engine's encryption buffer
-    ByteBuffer wrappedData = ByteBuffer.allocate(nioSslEngine.peerAppData.capacity());
-    byte[] netBytes = new byte[wrappedData.capacity() / 2];
-    Arrays.fill(netBytes, (byte) 0x1F);
-    wrappedData.put(netBytes);
-    wrappedData.flip();
-
-    // create an engine that will transfer bytes from the application buffer to the encrypted buffer
-    TestSSLEngine testEngine = new TestSSLEngine();
-    testEngine.addReturnResult(new SSLEngineResult(CLOSED, FINISHED, 0, 0));
-    spyNioSslEngine.engine = testEngine;
-
-    assertThatThrownBy(() -> spyNioSslEngine.unwrap(wrappedData)).isInstanceOf(SSLException.class)
-        .hasMessageContaining("Error decrypting data");
-  }
-
-  @Test
-  public void ensureUnwrappedCapacity() {
-    ByteBuffer wrappedBuffer = ByteBuffer.allocate(netBufferSize);
-    int requestedCapacity = nioSslEngine.getUnwrappedBuffer(wrappedBuffer).capacity() * 2;
-    ByteBuffer unwrappedBuffer = nioSslEngine.ensureUnwrappedCapacity(requestedCapacity);
-    assertThat(unwrappedBuffer.capacity()).isGreaterThanOrEqualTo(requestedCapacity);
-  }
-
-  @Test
-  public void close() throws Exception {
-    SocketChannel mockChannel = mock(SocketChannel.class);
-    Socket mockSocket = mock(Socket.class);
-    when(mockChannel.socket()).thenReturn(mockSocket);
-    when(mockSocket.isClosed()).thenReturn(false);
-
-    when(mockEngine.isOutboundDone()).thenReturn(Boolean.FALSE);
-    when(mockEngine.wrap(any(ByteBuffer.class), any(ByteBuffer.class))).thenReturn(
-        new SSLEngineResult(CLOSED, FINISHED, 0, 0));
-    nioSslEngine.close(mockChannel);
-    assertThatThrownBy(() -> nioSslEngine.checkClosed()).isInstanceOf(IllegalStateException.class);
-    nioSslEngine.close(mockChannel);
-  }
-
-  @Test
-  public void closeWhenUnwrapError() throws Exception {
-    SocketChannel mockChannel = mock(SocketChannel.class);
-    Socket mockSocket = mock(Socket.class);
-    when(mockChannel.socket()).thenReturn(mockSocket);
-    when(mockSocket.isClosed()).thenReturn(true);
-
-    when(mockEngine.isOutboundDone()).thenReturn(Boolean.FALSE);
-    when(mockEngine.wrap(any(ByteBuffer.class), any(ByteBuffer.class))).thenReturn(
-        new SSLEngineResult(BUFFER_OVERFLOW, FINISHED, 0, 0));
-    assertThatThrownBy(() -> nioSslEngine.close(mockChannel)).isInstanceOf(GemFireIOException.class)
-        .hasMessageContaining("exception closing SSL session")
-        .hasCauseInstanceOf(SSLException.class);
-  }
-
-  @Test
-  public void closeWhenSocketWriteError() throws Exception {
-    SocketChannel mockChannel = mock(SocketChannel.class);
-    Socket mockSocket = mock(Socket.class);
-    when(mockChannel.socket()).thenReturn(mockSocket);
-    when(mockSocket.isClosed()).thenReturn(true);
-
-    when(mockEngine.isOutboundDone()).thenReturn(Boolean.FALSE);
-    when(mockEngine.wrap(any(ByteBuffer.class), any(ByteBuffer.class))).thenAnswer((x) -> {
-      // give the NioSslEngine something to write on its socket channel, simulating a TLS close
-      // message
-      nioSslEngine.myNetData.put("Goodbye cruel world".getBytes());
-      return new SSLEngineResult(CLOSED, FINISHED, 0, 0);
-    });
-    when(mockChannel.write(any(ByteBuffer.class))).thenThrow(new ClosedChannelException());
-    nioSslEngine.close(mockChannel);
-    verify(mockChannel, times(1)).write(any(ByteBuffer.class));
-  }
-
-  @Test
-  public void ensureWrappedCapacityOfSmallMessage() {
-    ByteBuffer buffer = ByteBuffer.allocate(netBufferSize);
-    assertThat(
-        nioSslEngine.ensureWrappedCapacity(10, buffer, Buffers.BufferType.UNTRACKED, mockStats))
-            .isEqualTo(buffer);
-  }
-
-  @Test
-  public void ensureWrappedCapacityWithNoBuffer() {
-    assertThat(
-        nioSslEngine.ensureWrappedCapacity(10, null, Buffers.BufferType.UNTRACKED, mockStats)
-            .capacity())
-                .isEqualTo(netBufferSize);
-  }
-
-  @Test
-  public void readAtLeast() throws Exception {
-    final int amountToRead = 150;
-    final int individualRead = 60;
-    final int preexistingBytes = 10;
-    ByteBuffer wrappedBuffer = ByteBuffer.allocate(1000);
-    SocketChannel mockChannel = mock(SocketChannel.class);
-
-    // force a compaction by making the decoded buffer appear near to being full
-    ByteBuffer unwrappedBuffer = nioSslEngine.peerAppData;
-    unwrappedBuffer.position(unwrappedBuffer.capacity() - individualRead);
-    unwrappedBuffer.limit(unwrappedBuffer.position() + preexistingBytes);
-
-    // simulate some socket reads
-    when(mockChannel.read(any(ByteBuffer.class))).thenAnswer(new Answer<Integer>() {
-      @Override
-      public Integer answer(InvocationOnMock invocation) throws Throwable {
-        ByteBuffer buffer = invocation.getArgument(0);
-        buffer.position(buffer.position() + individualRead);
-        return individualRead;
-      }
-    });
-
-    TestSSLEngine testSSLEngine = new TestSSLEngine();
-    testSSLEngine.addReturnResult(new SSLEngineResult(OK, NEED_UNWRAP, 0, 0));
-    nioSslEngine.engine = testSSLEngine;
-
-    ByteBuffer data = nioSslEngine.readAtLeast(mockChannel, amountToRead, wrappedBuffer, mockStats);
-    verify(mockChannel, times(3)).read(isA(ByteBuffer.class));
-    assertThat(data.position()).isEqualTo(0);
-    assertThat(data.limit()).isEqualTo(individualRead * 3 + preexistingBytes);
-  }
-
-
-  /**
-   * This tests the case where a message header has been read and part of a message has been
-   * read, but the decoded buffer is too small to hold all of the message. In this case
-   * the readAtLeast method will have to expand the capacity of the decoded buffer and return
-   * the new, expanded, buffer as the method result.
-   */
-  @Test
-  public void readAtLeastUsingSmallAppBuffer() throws Exception {
-    final int amountToRead = 150;
-    final int individualRead = 60;
-    final int preexistingBytes = 10;
-    ByteBuffer wrappedBuffer = ByteBuffer.allocate(1000);
-    SocketChannel mockChannel = mock(SocketChannel.class);
-
-    // force buffer expansion by making a small decoded buffer appear near to being full
-    ByteBuffer unwrappedBuffer = ByteBuffer.allocate(100);
-    unwrappedBuffer.position(7).limit(preexistingBytes + 7); // 7 bytes of message header - ignored
-    nioSslEngine.peerAppData = unwrappedBuffer;
-
-    // simulate some socket reads
-    when(mockChannel.read(any(ByteBuffer.class))).thenAnswer(new Answer<Integer>() {
-      @Override
-      public Integer answer(InvocationOnMock invocation) throws Throwable {
-        ByteBuffer buffer = invocation.getArgument(0);
-        buffer.position(buffer.position() + individualRead);
-        return individualRead;
-      }
-    });
-
-    TestSSLEngine testSSLEngine = new TestSSLEngine();
-    testSSLEngine.addReturnResult(
-        new SSLEngineResult(OK, NEED_UNWRAP, 0, 0), // 10 + 60 bytes = 70
-        new SSLEngineResult(OK, NEED_UNWRAP, 0, 0), // 70 + 60 bytes = 130
-        new SSLEngineResult(BUFFER_OVERFLOW, NEED_UNWRAP, 0, 0), // need 190 bytes capacity
-        new SSLEngineResult(OK, NEED_UNWRAP, 0, 0)); // 130 + 60 bytes = 190
-    nioSslEngine.engine = testSSLEngine;
-
-    ByteBuffer data = nioSslEngine.readAtLeast(mockChannel, amountToRead, wrappedBuffer, mockStats);
-    verify(mockChannel, times(3)).read(isA(ByteBuffer.class));
-    assertThat(data.position()).isEqualTo(0);
-    assertThat(data.limit()).isEqualTo(individualRead * 3 + preexistingBytes);
-  }
-
-
-  // TestSSLEngine holds a stack of SSLEngineResults and always copies the
-  // input buffer to the output buffer byte-for-byte in wrap() and unwrap() operations.
-  // We use it in some tests where we need the byte-copying behavior because it's
-  // pretty difficult & cumbersome to implement with Mockito.
-  static class TestSSLEngine extends SSLEngine {
-
-    private List<SSLEngineResult> returnResults = new ArrayList<>();
-
-    private SSLEngineResult nextResult() {
-      SSLEngineResult result = returnResults.remove(0);
-      if (returnResults.isEmpty()) {
-        returnResults.add(result);
-      }
-      return result;
-    }
-
-    @Override
-    public SSLEngineResult wrap(ByteBuffer[] sources, int i, int i1, ByteBuffer destination) {
-      for (ByteBuffer source : sources) {
-        destination.put(source);
-      }
-      return nextResult();
-    }
-
-    @Override
-    public SSLEngineResult unwrap(ByteBuffer source, ByteBuffer[] destinations, int i, int i1) {
-      SSLEngineResult sslEngineResult = nextResult();
-      if (sslEngineResult.getStatus() != BUFFER_UNDERFLOW
-          && sslEngineResult.getStatus() != BUFFER_OVERFLOW) {
-        destinations[0].put(source);
-      }
-      return sslEngineResult;
-    }
-
-    @Override
-    public Runnable getDelegatedTask() {
-      return null;
-    }
-
-    @Override
-    public void closeInbound() {}
-
-    @Override
-    public boolean isInboundDone() {
-      return false;
-    }
-
-    @Override
-    public void closeOutbound() {}
-
-    @Override
-    public boolean isOutboundDone() {
-      return false;
-    }
-
-    @Override
-    public String[] getSupportedCipherSuites() {
-      return new String[0];
-    }
-
-    @Override
-    public String[] getEnabledCipherSuites() {
-      return new String[0];
-    }
-
-    @Override
-    public void setEnabledCipherSuites(String[] strings) {}
-
-    @Override
-    public String[] getSupportedProtocols() {
-      return new String[0];
-    }
-
-    @Override
-    public String[] getEnabledProtocols() {
-      return new String[0];
-    }
-
-    @Override
-    public void setEnabledProtocols(String[] strings) {}
-
-    @Override
-    public SSLSession getSession() {
-      return null;
-    }
-
-    @Override
-    public void beginHandshake() {}
-
-    @Override
-    public SSLEngineResult.HandshakeStatus getHandshakeStatus() {
-      return null;
-    }
-
-    @Override
-    public void setUseClientMode(boolean b) {}
-
-    @Override
-    public boolean getUseClientMode() {
-      return false;
-    }
-
-    @Override
-    public void setNeedClientAuth(boolean b) {}
-
-    @Override
-    public boolean getNeedClientAuth() {
-      return false;
-    }
-
-    @Override
-    public void setWantClientAuth(boolean b) {}
-
-    @Override
-    public boolean getWantClientAuth() {
-      return false;
-    }
-
-    @Override
-    public void setEnableSessionCreation(boolean b) {}
-
-    @Override
-    public boolean getEnableSessionCreation() {
-      return false;
-    }
-
-    /**
-     * add an engine operation result to be returned by wrap or unwrap.
-     * Like Mockito's thenReturn(), the last return result will repeat forever
-     */
-    void addReturnResult(SSLEngineResult... sslEngineResult) {
-      for (SSLEngineResult result : sslEngineResult) {
-        returnResults.add(result);
-      }
-    }
-  }
-}
diff --git a/geode-core/src/test/java/org/apache/geode/internal/tcp/ConnectionJUnitTest.java b/geode-core/src/test/java/org/apache/geode/internal/tcp/ConnectionJUnitTest.java
index 73cf06c..075b252 100755
--- a/geode-core/src/test/java/org/apache/geode/internal/tcp/ConnectionJUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/tcp/ConnectionJUnitTest.java
@@ -20,14 +20,14 @@ import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.verify;
 import static org.mockito.Mockito.when;
 
+import java.io.InputStream;
 import java.net.InetSocketAddress;
-import java.nio.channels.SocketChannel;
+import java.net.Socket;
 
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
 import org.apache.geode.CancelCriterion;
-import org.apache.geode.distributed.internal.DMStats;
 import org.apache.geode.distributed.internal.DistributionManager;
 import org.apache.geode.distributed.internal.membership.InternalDistributedMember;
 import org.apache.geode.distributed.internal.membership.MembershipManager;
@@ -64,17 +64,23 @@ public class ConnectionJUnitTest {
     when(conduit.getSocketId())
         .thenReturn(new InetSocketAddress(SocketCreator.getLocalHost(), 10337));
 
+    // NIO can't be mocked because SocketChannel has a final method that
+    // is used by Connection - configureBlocking
+    when(conduit.useNIO()).thenReturn(false);
+
     // mock the distribution manager and membership manager
     when(distMgr.getMembershipManager()).thenReturn(membership);
     when(conduit.getDM()).thenReturn(distMgr);
-    when(conduit.getStats()).thenReturn(mock(DMStats.class));
     when(table.getDM()).thenReturn(distMgr);
     SocketCloser closer = mock(SocketCloser.class);
     when(table.getSocketCloser()).thenReturn(closer);
 
-    SocketChannel channel = SocketChannel.open();
+    InputStream instream = mock(InputStream.class);
+    when(instream.read()).thenReturn(-1);
+    Socket socket = mock(Socket.class);
+    when(socket.getInputStream()).thenReturn(instream);
 
-    Connection conn = new Connection(table, channel.socket());
+    Connection conn = new Connection(table, socket);
     conn.setSharedUnorderedForTest();
     conn.run();
     verify(membership).suspectMember(isNull(InternalDistributedMember.class), any(String.class));
diff --git a/geode-core/src/test/java/org/apache/geode/internal/tcp/ConnectionTest.java b/geode-core/src/test/java/org/apache/geode/internal/tcp/ConnectionTest.java
index 77160c8..e7928b9 100644
--- a/geode-core/src/test/java/org/apache/geode/internal/tcp/ConnectionTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/tcp/ConnectionTest.java
@@ -38,9 +38,9 @@ public class ConnectionTest {
     boolean forceAsync = true;
     DistributionMessage mockDistributionMessage = mock(DistributionMessage.class);
 
-    mockConnection.writeFully(channel, buffer, forceAsync, mockDistributionMessage);
+    mockConnection.nioWriteFully(channel, buffer, forceAsync, mockDistributionMessage);
 
-    verify(mockConnection, times(1)).writeFully(channel, buffer, forceAsync,
+    verify(mockConnection, times(1)).nioWriteFully(channel, buffer, forceAsync,
         mockDistributionMessage);
   }
 }
diff --git a/geode-core/src/test/resources/org/apache/geode/internal/util/PluckStacksJstackGeneratedDump.txt b/geode-core/src/test/resources/org/apache/geode/internal/util/PluckStacksJstackGeneratedDump.txt
index 7776fcc..d1d7ff8 100644
--- a/geode-core/src/test/resources/org/apache/geode/internal/util/PluckStacksJstackGeneratedDump.txt
+++ b/geode-core/src/test/resources/org/apache/geode/internal/util/PluckStacksJstackGeneratedDump.txt
@@ -16,7 +16,7 @@ Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.144-b01 mixed mode):
 	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
 	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
 	- locked <0x0000000749c85bd8> (a java.lang.Object)
-	at readMessages(Connection.java:1810)
+	at org.apache.geode.internal.tcp.Connection.runNioReader(Connection.java:1810)
 	at org.apache.geode.internal.tcp.Connection.run(Connection.java:1690)
 	at java.lang.Thread.run(Thread.java:748)
 
@@ -31,7 +31,7 @@ Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.144-b01 mixed mode):
 	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
 	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
 	- locked <0x0000000749c859a0> (a java.lang.Object)
-	at readMessages(Connection.java:1810)
+	at org.apache.geode.internal.tcp.Connection.runNioReader(Connection.java:1810)
 	at org.apache.geode.internal.tcp.Connection.run(Connection.java:1690)
 	at java.lang.Thread.run(Thread.java:748)
 
@@ -61,7 +61,7 @@ Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.144-b01 mixed mode):
 	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
 	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
 	- locked <0x0000000749c85768> (a java.lang.Object)
-	at readMessages(Connection.java:1810)
+	at org.apache.geode.internal.tcp.Connection.runNioReader(Connection.java:1810)
 	at org.apache.geode.internal.tcp.Connection.run(Connection.java:1690)
 	at java.lang.Thread.run(Thread.java:748)
 
@@ -176,7 +176,7 @@ Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.144-b01 mixed mode):
 	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
 	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
 	- locked <0x0000000749c84fd0> (a java.lang.Object)
-	at readMessages(Connection.java:1810)
+	at org.apache.geode.internal.tcp.Connection.runNioReader(Connection.java:1810)
 	at org.apache.geode.internal.tcp.Connection.run(Connection.java:1690)
 	at java.lang.Thread.run(Thread.java:748)
 
@@ -1080,7 +1080,7 @@ Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.144-b01 mixed mode):
 	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
 	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
 	- locked <0x00000007435986d8> (a java.lang.Object)
-	at readMessages(Connection.java:1810)
+	at org.apache.geode.internal.tcp.Connection.runNioReader(Connection.java:1810)
 	at org.apache.geode.internal.tcp.Connection.run(Connection.java:1690)
 	at java.lang.Thread.run(Thread.java:748)
 
@@ -1111,7 +1111,7 @@ Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.144-b01 mixed mode):
 	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
 	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
 	- locked <0x00000007435984a0> (a java.lang.Object)
-	at readMessages(Connection.java:1810)
+	at org.apache.geode.internal.tcp.Connection.runNioReader(Connection.java:1810)
 	at org.apache.geode.internal.tcp.Connection.run(Connection.java:1690)
 	at java.lang.Thread.run(Thread.java:748)
 
@@ -1389,7 +1389,7 @@ Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.144-b01 mixed mode):
 	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
 	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
 	- locked <0x00000006479b3c70> (a java.lang.Object)
-	at readMessages(Connection.java:1810)
+	at org.apache.geode.internal.tcp.Connection.runNioReader(Connection.java:1810)
 	at org.apache.geode.internal.tcp.Connection.run(Connection.java:1690)
 	at java.lang.Thread.run(Thread.java:748)
 
@@ -2074,7 +2074,7 @@ Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.144-b01 mixed mode):
 	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
 	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
 	- locked <0x0000000647a66b00> (a java.lang.Object)
-	at readMessages(Connection.java:1810)
+	at org.apache.geode.internal.tcp.Connection.runNioReader(Connection.java:1810)
 	at org.apache.geode.internal.tcp.Connection.run(Connection.java:1690)
 	at java.lang.Thread.run(Thread.java:748)
 
@@ -2108,7 +2108,7 @@ Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.144-b01 mixed mode):
 	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
 	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
 	- locked <0x0000000647a92678> (a java.lang.Object)
-	at readMessages(Connection.java:1810)
+	at org.apache.geode.internal.tcp.Connection.runNioReader(Connection.java:1810)
 	at org.apache.geode.internal.tcp.Connection.run(Connection.java:1690)
 	at java.lang.Thread.run(Thread.java:748)
 
diff --git a/geode-dunit/src/main/java/org/apache/geode/test/dunit/internal/ProcessManager.java b/geode-dunit/src/main/java/org/apache/geode/test/dunit/internal/ProcessManager.java
index d43b644..c0e1237 100755
--- a/geode-dunit/src/main/java/org/apache/geode/test/dunit/internal/ProcessManager.java
+++ b/geode-dunit/src/main/java/org/apache/geode/test/dunit/internal/ProcessManager.java
@@ -159,22 +159,10 @@ class ProcessManager implements ChildVMLauncher {
 
   private void linkStreams(final String version, final int vmNum, final ProcessHolder holder,
       final InputStream in, final PrintStream out) {
-    final String vmName = "[" + VM.getVMName(version, vmNum);
+    final String vmName = "[" + VM.getVMName(version, vmNum) + "] ";
     Thread ioTransport = new Thread() {
       @Override
       public void run() {
-        StringBuffer sb = new StringBuffer();
-        // use low four bytes for backward compatibility
-        long time = System.currentTimeMillis() & 0xffffffffL;
-        for (int i = 0; i < 4; i++) {
-          String hex = Integer.toHexString((int) (time & 0xff));
-          if (hex.length() < 2) {
-            sb.append('0');
-          }
-          sb.append(hex);
-          time = time / 0x100;
-        }
-        String uniqueString = vmName + ", 0x" + sb.toString() + "] ";
         BufferedReader reader = new BufferedReader(new InputStreamReader(in));
         try {
           String line = reader.readLine();
@@ -182,7 +170,7 @@ class ProcessManager implements ChildVMLauncher {
             if (line.length() == 0) {
               out.println();
             } else {
-              out.print(uniqueString);
+              out.print(vmName);
               out.println(line);
             }
             line = reader.readLine();
diff --git a/geode-dunit/src/main/resources/org/apache/geode/server.keystore b/geode-dunit/src/main/resources/org/apache/geode/server.keystore
deleted file mode 100644
index 8b5305f..0000000
Binary files a/geode-dunit/src/main/resources/org/apache/geode/server.keystore and /dev/null differ
diff --git a/geode-wan/src/distributedTest/java/org/apache/geode/internal/cache/wan/WANTestBase.java b/geode-wan/src/distributedTest/java/org/apache/geode/internal/cache/wan/WANTestBase.java
index d365325..1210b23 100644
--- a/geode-wan/src/distributedTest/java/org/apache/geode/internal/cache/wan/WANTestBase.java
+++ b/geode-wan/src/distributedTest/java/org/apache/geode/internal/cache/wan/WANTestBase.java
@@ -2834,6 +2834,7 @@ public class WANTestBase extends DistributedTestCase {
     });
     for (int i = 0; i < regionSize; i++) {
       final int temp = i;
+      logger.info("For Key : Key_" + i + " : Values : " + r.get("Key_" + i));
       await()
           .untilAsserted(() -> assertEquals(
               "keySet = " + r.keySet() + " values() = " + r.values() + "Region Size = " + r.size(),
diff --git a/geode-wan/src/distributedTest/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderDistributedDeadlockDUnitTest.java b/geode-wan/src/distributedTest/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderDistributedDeadlockDUnitTest.java
index 40e984b..0394fb0 100644
--- a/geode-wan/src/distributedTest/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderDistributedDeadlockDUnitTest.java
+++ b/geode-wan/src/distributedTest/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderDistributedDeadlockDUnitTest.java
@@ -30,6 +30,7 @@ import org.apache.geode.cache.execute.RegionFunctionContext;
 import org.apache.geode.distributed.DistributedSystem;
 import org.apache.geode.internal.cache.wan.WANTestBase;
 import org.apache.geode.test.dunit.AsyncInvocation;
+import org.apache.geode.test.dunit.Wait;
 import org.apache.geode.test.junit.categories.WanTest;
 
 // The tests here are to validate changes introduced because a distributed deadlock
@@ -69,13 +70,13 @@ public class SerialGatewaySenderDistributedDeadlockDUnitTest extends WANTestBase
     // exercise region and gateway operations with different messaging
     exerciseWANOperations();
     AsyncInvocation invVM4transaction =
-        vm4.invokeAsync("doTxPutsAsync", () -> WANTestBase.doTxPuts(getTestMethodName() + "_RR"));
+        vm4.invokeAsync(() -> WANTestBase.doTxPuts(getTestMethodName() + "_RR"));
     AsyncInvocation invVM5transaction =
-        vm5.invokeAsync("doTxPutsAsync", () -> WANTestBase.doTxPuts(getTestMethodName() + "_RR"));
+        vm5.invokeAsync(() -> WANTestBase.doTxPuts(getTestMethodName() + "_RR"));
     AsyncInvocation invVM4 =
-        vm4.invokeAsync("doPutsAsync", () -> WANTestBase.doPuts(getTestMethodName() + "_RR", 1000));
+        vm4.invokeAsync(() -> WANTestBase.doPuts(getTestMethodName() + "_RR", 1000));
     AsyncInvocation invVM5 =
-        vm5.invokeAsync("doPutsAsync", () -> WANTestBase.doPuts(getTestMethodName() + "_RR", 1000));
+        vm5.invokeAsync(() -> WANTestBase.doPuts(getTestMethodName() + "_RR", 1000));
 
     exerciseFunctions();
 
@@ -94,11 +95,9 @@ public class SerialGatewaySenderDistributedDeadlockDUnitTest extends WANTestBase
   // Uses partitioned regions and conserve-sockets=false
   @Test
   public void testPrimarySendersOnDifferentVMsPR() throws Exception {
-    Integer lnPort =
-        (Integer) vm0.invoke("createFirstPeerLocator", () -> WANTestBase.createFirstPeerLocator(1));
+    Integer lnPort = (Integer) vm0.invoke(() -> WANTestBase.createFirstPeerLocator(1));
 
-    Integer nyPort = (Integer) vm1.invoke("createFirstRemoteLocator",
-        () -> WANTestBase.createFirstRemoteLocator(2, lnPort));
+    Integer nyPort = (Integer) vm1.invoke(() -> WANTestBase.createFirstRemoteLocator(2, lnPort));
 
     createCachesWith(Boolean.FALSE, nyPort, lnPort);
 
@@ -110,34 +109,37 @@ public class SerialGatewaySenderDistributedDeadlockDUnitTest extends WANTestBase
 
     exerciseWANOperations();
     AsyncInvocation invVM4transaction =
-        vm4.invokeAsync("doTxPutsPRAsync", () -> SerialGatewaySenderDistributedDeadlockDUnitTest
+        vm4.invokeAsync(() -> SerialGatewaySenderDistributedDeadlockDUnitTest
             .doTxPutsPR(getTestMethodName() + "_RR", 100, 1000));
     AsyncInvocation invVM5transaction =
-        vm5.invokeAsync("doTxPutsPRAsync", () -> SerialGatewaySenderDistributedDeadlockDUnitTest
+        vm5.invokeAsync(() -> SerialGatewaySenderDistributedDeadlockDUnitTest
             .doTxPutsPR(getTestMethodName() + "_RR", 100, 1000));
 
     AsyncInvocation invVM4 =
-        vm4.invokeAsync("doPutsAsync", () -> WANTestBase.doPuts(getTestMethodName() + "_RR", 1000));
+        vm4.invokeAsync(() -> WANTestBase.doPuts(getTestMethodName() + "_RR", 1000));
     AsyncInvocation invVM5 =
-        vm5.invokeAsync("doPutsAsync", () -> WANTestBase.doPuts(getTestMethodName() + "_RR", 1000));
+        vm5.invokeAsync(() -> WANTestBase.doPuts(getTestMethodName() + "_RR", 1000));
 
     exerciseFunctions();
 
-    invVM4transaction.join();
-    invVM5transaction.join();
-    invVM4.join();
-    invVM5.join();
+    try {
+      invVM4transaction.join();
+      invVM5transaction.join();
+      invVM4.join();
+      invVM5.join();
+    } catch (InterruptedException e) {
+      e.printStackTrace();
+      fail();
+    }
   }
 
   // Uses replicated regions and conserve-sockets=true
   @Test
   public void testPrimarySendersOnDifferentVMsReplicatedSocketPolicy() throws Exception {
 
-    Integer lnPort =
-        (Integer) vm0.invoke("createFirstPeerLocator", () -> WANTestBase.createFirstPeerLocator(1));
+    Integer lnPort = (Integer) vm0.invoke(() -> WANTestBase.createFirstPeerLocator(1));
 
-    Integer nyPort = (Integer) vm1.invoke("createFirstRemoteLocator",
-        () -> WANTestBase.createFirstRemoteLocator(2, lnPort));
+    Integer nyPort = (Integer) vm1.invoke(() -> WANTestBase.createFirstRemoteLocator(2, lnPort));
 
     createCachesWith(Boolean.TRUE, nyPort, lnPort);
 
@@ -152,32 +154,35 @@ public class SerialGatewaySenderDistributedDeadlockDUnitTest extends WANTestBase
     // exercise region and gateway operations with messaging
     exerciseWANOperations();
     AsyncInvocation invVM4transaction =
-        vm4.invokeAsync("doTxPutsAsync", () -> WANTestBase.doTxPuts(getTestMethodName() + "_RR"));
+        vm4.invokeAsync(() -> WANTestBase.doTxPuts(getTestMethodName() + "_RR"));
     AsyncInvocation invVM5transaction =
-        vm5.invokeAsync("doTxPutsAsync", () -> WANTestBase.doTxPuts(getTestMethodName() + "_RR"));
+        vm5.invokeAsync(() -> WANTestBase.doTxPuts(getTestMethodName() + "_RR"));
 
     AsyncInvocation invVM4 =
-        vm4.invokeAsync("doPutsAsync", () -> WANTestBase.doPuts(getTestMethodName() + "_RR", 1000));
+        vm4.invokeAsync(() -> WANTestBase.doPuts(getTestMethodName() + "_RR", 1000));
     AsyncInvocation invVM5 =
-        vm5.invokeAsync("doPutsAsync", () -> WANTestBase.doPuts(getTestMethodName() + "_RR", 1000));
+        vm5.invokeAsync(() -> WANTestBase.doPuts(getTestMethodName() + "_RR", 1000));
 
     exerciseFunctions();
 
-    invVM4transaction.join();
-    invVM5transaction.join();
-    invVM4.join();
-    invVM5.join();
+    try {
+      invVM4transaction.join();
+      invVM5transaction.join();
+      invVM4.join();
+      invVM5.join();
+    } catch (InterruptedException e) {
+      e.printStackTrace();
+      fail();
+    }
   }
 
   // Uses partitioned regions and conserve-sockets=true
   // this always causes a distributed deadlock
   @Test
   public void testPrimarySendersOnDifferentVMsPRSocketPolicy() throws Exception {
-    Integer lnPort =
-        (Integer) vm0.invoke("createFirstPeerLocator", () -> WANTestBase.createFirstPeerLocator(1));
+    Integer lnPort = (Integer) vm0.invoke(() -> WANTestBase.createFirstPeerLocator(1));
 
-    Integer nyPort = (Integer) vm1.invoke("createFirstRemoteLocator",
-        () -> WANTestBase.createFirstRemoteLocator(2, lnPort));
+    Integer nyPort = (Integer) vm1.invoke(() -> WANTestBase.createFirstRemoteLocator(2, lnPort));
 
     createCachesWith(Boolean.TRUE, nyPort, lnPort);
 
@@ -189,23 +194,28 @@ public class SerialGatewaySenderDistributedDeadlockDUnitTest extends WANTestBase
 
     exerciseWANOperations();
     AsyncInvocation invVM4transaction =
-        vm4.invokeAsync("doTxPutsPRAsync", () -> SerialGatewaySenderDistributedDeadlockDUnitTest
+        vm4.invokeAsync(() -> SerialGatewaySenderDistributedDeadlockDUnitTest
             .doTxPutsPR(getTestMethodName() + "_RR", 100, 1000));
     AsyncInvocation invVM5transaction =
-        vm5.invokeAsync("doTxPutsPRAsync", () -> SerialGatewaySenderDistributedDeadlockDUnitTest
+        vm5.invokeAsync(() -> SerialGatewaySenderDistributedDeadlockDUnitTest
             .doTxPutsPR(getTestMethodName() + "_RR", 100, 1000));
 
     AsyncInvocation invVM4 =
-        vm4.invokeAsync("doPutsAsync", () -> WANTestBase.doPuts(getTestMethodName() + "_RR", 1000));
+        vm4.invokeAsync(() -> WANTestBase.doPuts(getTestMethodName() + "_RR", 1000));
     AsyncInvocation invVM5 =
-        vm5.invokeAsync("doPutsAsync", () -> WANTestBase.doPuts(getTestMethodName() + "_RR", 1000));
+        vm5.invokeAsync(() -> WANTestBase.doPuts(getTestMethodName() + "_RR", 1000));
 
     exerciseFunctions();
 
-    invVM4transaction.join();
-    invVM5transaction.join();
-    invVM4.join();
-    invVM5.join();
+    try {
+      invVM4transaction.join();
+      invVM5transaction.join();
+      invVM4.join();
+      invVM5.join();
+    } catch (InterruptedException e) {
+      e.printStackTrace();
+      fail();
+    }
   }
 
   // **************************************************************************
@@ -213,64 +223,58 @@ public class SerialGatewaySenderDistributedDeadlockDUnitTest extends WANTestBase
   // **************************************************************************
   private void createReplicatedRegions(Integer nyPort) throws Exception {
     // create receiver
-    vm2.invoke("createReplicatedRegion",
-        () -> WANTestBase.createReplicatedRegion(getTestMethodName() + "_RR", null, false));
-    vm2.invoke("createReceiver", () -> WANTestBase.createReceiver());
+    vm2.invoke(() -> WANTestBase.createReplicatedRegion(getTestMethodName() + "_RR", null, false));
+    vm2.invoke(() -> WANTestBase.createReceiver());
 
     // create senders
-    vm4.invoke("createReplicatedRegion",
+    vm4.invoke(
         () -> WANTestBase.createReplicatedRegion(getTestMethodName() + "_RR", "ln1,ln2", false));
 
-    vm5.invoke("createReplicatedRegion",
+    vm5.invoke(
         () -> WANTestBase.createReplicatedRegion(getTestMethodName() + "_RR", "ln1,ln2", false));
   }
 
   private void createCachesWith(Boolean socketPolicy, Integer nyPort, Integer lnPort) {
-    vm2.invoke("createCacheConserveSockets",
-        () -> WANTestBase.createCacheConserveSockets(socketPolicy, nyPort));
+    vm2.invoke(() -> WANTestBase.createCacheConserveSockets(socketPolicy, nyPort));
 
-    vm4.invoke("createCacheConserveSockets",
-        () -> WANTestBase.createCacheConserveSockets(socketPolicy, lnPort));
+    vm4.invoke(() -> WANTestBase.createCacheConserveSockets(socketPolicy, lnPort));
 
-    vm5.invoke("createCacheConserveSockets",
-        () -> WANTestBase.createCacheConserveSockets(socketPolicy, lnPort));
+    vm5.invoke(() -> WANTestBase.createCacheConserveSockets(socketPolicy, lnPort));
   }
 
   private void exerciseFunctions() throws Exception {
     // do function calls that use a shared connection
     for (int x = 0; x < 1000; x++) {
       // setting it to Boolean.TRUE it should pass the test
-      vm4.invoke("doFunctionPuts", () -> SerialGatewaySenderDistributedDeadlockDUnitTest
+      vm4.invoke(() -> SerialGatewaySenderDistributedDeadlockDUnitTest
           .doFunctionPuts(getTestMethodName() + "_RR", 1, Boolean.TRUE));
-      vm5.invoke("doFunctionPuts", () -> SerialGatewaySenderDistributedDeadlockDUnitTest
+      vm5.invoke(() -> SerialGatewaySenderDistributedDeadlockDUnitTest
           .doFunctionPuts(getTestMethodName() + "_RR", 1, Boolean.TRUE));
     }
     for (int x = 0; x < 1000; x++) {
       // setting the Boolean.FALSE below will cause a deadlock in some GFE versions
       // setting it to Boolean.TRUE as above it should pass the test
       // this is similar to the customer found distributed deadlock
-      vm4.invoke("doFunctionPuts2", () -> SerialGatewaySenderDistributedDeadlockDUnitTest
+      vm4.invoke(() -> SerialGatewaySenderDistributedDeadlockDUnitTest
           .doFunctionPuts(getTestMethodName() + "_RR", 1, Boolean.FALSE));
-      vm5.invoke("doFunctionPuts2", () -> SerialGatewaySenderDistributedDeadlockDUnitTest
+      vm5.invoke(() -> SerialGatewaySenderDistributedDeadlockDUnitTest
           .doFunctionPuts(getTestMethodName() + "_RR", 1, Boolean.FALSE));
     }
   }
 
   private void createPartitionedRegions(Integer nyPort) throws Exception {
     // create remote receiver
-    vm2.invoke("createPartitionedRegion",
+    vm2.invoke(
         () -> WANTestBase.createPartitionedRegion(getTestMethodName() + "_RR", "", 0, 113, false));
 
-    vm2.invoke("createReceiver", () -> WANTestBase.createReceiver());
+    vm2.invoke(() -> WANTestBase.createReceiver());
 
     // create sender vms
-    vm4.invoke("createPartitionedRegion",
-        () -> WANTestBase.createPartitionedRegion(getTestMethodName() + "_RR", "ln1,ln2", 1,
-            113, false));
+    vm4.invoke(() -> WANTestBase.createPartitionedRegion(getTestMethodName() + "_RR", "ln1,ln2", 1,
+        113, false));
 
-    vm5.invoke("createPartitionedRegion",
-        () -> WANTestBase.createPartitionedRegion(getTestMethodName() + "_RR", "ln1,ln2", 1,
-            113, false));
+    vm5.invoke(() -> WANTestBase.createPartitionedRegion(getTestMethodName() + "_RR", "ln1,ln2", 1,
+        113, false));
   }
 
   private void exerciseWANOperations() throws Exception {
@@ -278,85 +282,60 @@ public class SerialGatewaySenderDistributedDeadlockDUnitTest extends WANTestBase
     // messaging between the WAN gateways and members
 
     // exercise region and gateway operations
-    vm4.invoke("exerciseWANOperations.doPuts",
-        () -> WANTestBase.doPuts(getTestMethodName() + "_RR", 100));
-    vm5.invoke("exerciseWANOperations.doPuts",
-        () -> WANTestBase.doPuts(getTestMethodName() + "_RR", 100));
-    vm4.invoke("exerciseWANOperations.validateRegionSize",
-        () -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 100));
-    vm2.invoke("exerciseWANOperations.validateRegionSize",
-        () -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 100));
-
-    vm5.invoke("exerciseWANOperations.doDestroys",
-        () -> WANTestBase.doDestroys(getTestMethodName() + "_RR", 100));
-    vm5.invoke("exerciseWANOperations.validateRegionSize",
-        () -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 0));
-    vm2.invoke("exerciseWANOperations.validateRegionSize",
-        () -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 0));
-
-    vm4.invoke("exerciseWANOperations.doPuts2",
-        () -> WANTestBase.doPuts(getTestMethodName() + "_RR", 100));
-    vm5.invoke("exerciseWANOperations.doPuts2",
-        () -> WANTestBase.doPuts(getTestMethodName() + "_RR", 100));
-    vm4.invoke("exerciseWANOperations.validateRegionSize",
-        () -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 100));
-    vm2.invoke("exerciseWANOperations.validateRegionSize",
-        () -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 100));
-
-    vm4.invoke("exerciseWANOperations.doInvalidates",
-        () -> SerialGatewaySenderDistributedDeadlockDUnitTest
-            .doInvalidates(getTestMethodName() + "_RR", 100, 100));
-    vm4.invoke("exerciseWANOperations.doPutAll",
-        () -> WANTestBase.doPutAll(getTestMethodName() + "_RR", 100, 10));
-    vm5.invoke("exerciseWANOperations.doPutAll",
-        () -> WANTestBase.doPutAll(getTestMethodName() + "_RR", 100, 10));
-    vm4.invoke("exerciseWANOperations.validateRegionSize",
-        () -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 1000));
-    vm2.invoke("exerciseWANOperations.validateRegionSize",
-        () -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 1000));
-
-    vm4.invoke("exerciseWANOperations.doDestroys",
-        () -> WANTestBase.doDestroys(getTestMethodName() + "_RR", 1000));
-    vm5.invoke("exerciseWANOperations.validateRegionSize",
-        () -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 0));
-    vm2.invoke("exerciseWANOperations.validateRegionSize",
-        () -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 0));
-
-    vm4.invoke("exerciseWANOperations.doPutsPDXSerializable",
-        () -> WANTestBase.doPutsPDXSerializable(getTestMethodName() + "_RR", 100));
-    vm5.invoke("exerciseWANOperations.validateRegionSize_PDX",
-        () -> WANTestBase.validateRegionSize_PDX(getTestMethodName() + "_RR", 100));
-    vm2.invoke("exerciseWANOperations.validateRegionSize_PDX",
-        () -> WANTestBase.validateRegionSize_PDX(getTestMethodName() + "_RR", 100));
+    vm4.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_RR", 100));
+    vm5.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_RR", 100));
+    Wait.pause(2000); // wait for events to propagate
+    vm4.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 100));
+    vm2.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 100));
+    vm5.invoke(() -> WANTestBase.doDestroys(getTestMethodName() + "_RR", 100));
+    Wait.pause(2000);// wait for events to propagate
+    vm5.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 0));
+    vm2.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 0));
+    vm4.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_RR", 100));
+    vm5.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_RR", 100));
+    Wait.pause(2000); // wait for events to propagate
+    vm4.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 100));
+    vm2.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 100));
+    vm4.invoke(() -> SerialGatewaySenderDistributedDeadlockDUnitTest
+        .doInvalidates(getTestMethodName() + "_RR", 100, 100));
+    vm4.invoke(() -> WANTestBase.doPutAll(getTestMethodName() + "_RR", 100, 10));
+    vm5.invoke(() -> WANTestBase.doPutAll(getTestMethodName() + "_RR", 100, 10));
+    Wait.pause(2000);// wait for events to propagate
+    vm4.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 1000));
+    vm2.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 1000));
+    vm4.invoke(() -> WANTestBase.doDestroys(getTestMethodName() + "_RR", 1000));
+    Wait.pause(2000);// wait for events to propagate
+    vm5.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 0));
+    vm2.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_RR", 0));
+    vm4.invoke(() -> WANTestBase.doPutsPDXSerializable(getTestMethodName() + "_RR", 100));
+    Wait.pause(2000);
+    vm5.invoke(() -> WANTestBase.validateRegionSize_PDX(getTestMethodName() + "_RR", 100));
+    vm2.invoke(() -> WANTestBase.validateRegionSize_PDX(getTestMethodName() + "_RR", 100));
   }
 
   private void startSerialSenders() throws Exception {
     // get one primary sender on vm4 and another primary on vm5
     // the startup order matters here so that primaries are
     // on different JVM's
-    vm4.invoke("start primary sender", () -> WANTestBase.startSender("ln1"));
+    vm4.invoke(() -> WANTestBase.startSender("ln1"));
 
-    vm5.invoke("start primary sender", () -> WANTestBase.startSender("ln2"));
+    vm5.invoke(() -> WANTestBase.startSender("ln2"));
 
     // start secondaries
-    vm5.invoke("start secondary sender", () -> WANTestBase.startSender("ln1"));
+    vm5.invoke(() -> WANTestBase.startSender("ln1"));
 
-    vm4.invoke("start secondary sender", () -> WANTestBase.startSender("ln2"));
+    vm4.invoke(() -> WANTestBase.startSender("ln2"));
   }
 
   private void createSerialSenders() throws Exception {
 
-    vm4.invoke("create primary sender",
-        () -> WANTestBase.createSender("ln1", 2, false, 100, 10, false, false, null, true));
+    vm4.invoke(() -> WANTestBase.createSender("ln1", 2, false, 100, 10, false, false, null, true));
 
-    vm5.invoke("create secondary sender",
-        () -> WANTestBase.createSender("ln1", 2, false, 100, 10, false, false, null, true));
+    vm5.invoke(() -> WANTestBase.createSender("ln1", 2, false, 100, 10, false, false, null, true));
 
-    vm4.invoke("create primary sender",
-        () -> WANTestBase.createSender("ln2", 2, false, 100, 10, false, false, null, true));
+    vm4.invoke(() -> WANTestBase.createSender("ln2", 2, false, 100, 10, false, false, null, true));
 
-    vm5.invoke("create secondary sender",
-        () -> WANTestBase.createSender("ln2", 2, false, 100, 10, false, false, null, true));
+    vm5.invoke(() -> WANTestBase.createSender("ln2", 2, false, 100, 10, false, false, null, true));
   }
 
   public static void doFunctionPuts(String name, int num, Boolean useThreadOwnedSocket)


[geode] 18/21: Keep newest packer but install specific version as well. (#3611)

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit 6ad1635ed7315ba9ff630be9f0c5b08067764be5
Author: Sean Goller <sg...@pivotal.io>
AuthorDate: Tue May 21 10:32:04 2019 -0700

    Keep newest packer but install specific version as well. (#3611)
    
    Authored-by: Sean Goller <sg...@pivotal.io>
    (cherry picked from commit bb8bf53953bce591427c578bf61335630d4490fc)
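    With this change the alpine-tools image carries two packer binaries side by
    side. A minimal sketch of how they would be invoked inside the resulting
    container (session is illustrative, not taken from CI output):

        packer version      # newest packer, copied from hashicorp/packer:latest
        packer135 version   # pinned 1.3.5 binary at /usr/local/bin/packer135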
---
 ci/images/alpine-tools/Dockerfile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/ci/images/alpine-tools/Dockerfile b/ci/images/alpine-tools/Dockerfile
index 7d86443..bc08fc2 100644
--- a/ci/images/alpine-tools/Dockerfile
+++ b/ci/images/alpine-tools/Dockerfile
@@ -17,6 +17,7 @@ FROM openjdk:8-jdk-alpine
 
 COPY --from=google/cloud-sdk:alpine /google-cloud-sdk /google-cloud-sdk
 COPY --from=hashicorp/packer:latest /bin/packer /usr/local/bin/packer
+COPY --from=hashicorp/packer:1.3.5 /bin/packer /usr/local/bin/packer135
 ENV PATH /google-cloud-sdk/bin:$PATH
 RUN apk --no-cache add \
       bash \


[geode] 13/21: Remove hopefully now spurious line. (#3612)

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit 86d07650b5f6b228cb2444a5dcec8db16cc6ac90
Author: Sean Goller <sg...@pivotal.io>
AuthorDate: Tue May 21 11:30:26 2019 -0700

    Remove hopefully now spurious line. (#3612)
    
    Authored-by: Sean Goller <sg...@pivotal.io>
    (cherry picked from commit 9ceb83c2ce18c0c60d928f94432b0c15158805a0)
---
 ci/images/google-windows-geode-builder/windows-packer.json | 1 -
 1 file changed, 1 deletion(-)

diff --git a/ci/images/google-windows-geode-builder/windows-packer.json b/ci/images/google-windows-geode-builder/windows-packer.json
index bb75405..d2c96f7 100644
--- a/ci/images/google-windows-geode-builder/windows-packer.json
+++ b/ci/images/google-windows-geode-builder/windows-packer.json
@@ -64,7 +64,6 @@
         "$OldPath = (Get-ItemProperty -Path 'Registry::HKEY_LOCAL_MACHINE\\System\\CurrentControlSet\\Control\\Session Manager\\Environment' -Name PATH).Path",
         "$NewPath = $OldPath + ';' + 'c:\\Program Files\\Git\\bin'",
         "Set-ItemProperty -Path 'Registry::HKEY_LOCAL_MACHINE\\System\\CurrentControlSet\\Control\\Session Manager\\Environment' -Name PATH -Value $NewPath",
-        "Install-Module -Name ProcessMitigations -Force",
         "Get-ChildItem -Path \"C:\\Program Files\\Git\\bin\" -Recurse -Include *exe | %{ Set-ProcessMitigation -Name $_.Name -Disable ForceRelocateASLR,ForceRelocate }",
         "Get-ChildItem -Path \"C:\\ProgramData\\chocolatey\" -Recurse -Include *exe | %{ Set-ProcessMitigation -Name $_.Name -Disable ForceRelocateASLR,ForceRelocate }",
 


[geode] 16/21: Add flags to allow for local building. (#3830)

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit 1668eb62b277fcf63a8db0f70ec9bfc3fb1bd82a
Author: Sean Goller <sg...@pivotal.io>
AuthorDate: Mon Jul 22 16:33:35 2019 -0700

    Add flags to allow for local building. (#3830)
    
    Authored-by: Sean Goller <sg...@pivotal.io>
    (cherry picked from commit 9c99e1bcb99ddf8c3ee8b16062e2e59924262a8a)
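    With PACKER and INTERNAL now overridable, and GCP_PROJECT required when not
    running on a GCP instance, a local run might look like the following sketch;
    the project id is a placeholder, not a real CI value:

        GCP_PROJECT=my-gcp-project \
        INTERNAL=false \
        PACKER=packer \
        ./ci/images/google-windows-geode-builder/build_image.sh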
---
 ci/images/google-windows-geode-builder/build_image.sh | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/ci/images/google-windows-geode-builder/build_image.sh b/ci/images/google-windows-geode-builder/build_image.sh
index 7664642..8c6c3cd 100755
--- a/ci/images/google-windows-geode-builder/build_image.sh
+++ b/ci/images/google-windows-geode-builder/build_image.sh
@@ -17,6 +17,8 @@
 # limitations under the License.
 set -x
 
+PACKER=${PACKER:-packer135}
+INTERNAL=${INTERNAL:-true}
 SOURCE="${BASH_SOURCE[0]}"
 while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
   SCRIPTDIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
@@ -46,10 +48,15 @@ if [[ -n "${MY_NAME}" ]]; then
   GCP_SUBNETWORK=${GCP_SUBNETWORK##*/}
 fi
 
+if [[ -z "${GCP_PROJECT}" ]]; then
+  echo "GCP_PROJECT is unset. Cowardly refusing to continue."
+  exit 1
+fi
+
 HASHED_PIPELINE_PREFIX="i$(uuidgen -n @dns -s -N "${PIPELINE_PREFIX}")-"
 
 echo "Running packer"
-PACKER_LOG=1 packer build \
+PACKER_LOG=1 ${PACKER} build \
   --var "geode_docker_image=${GEODE_DOCKER_IMAGE}" \
   --var "pipeline_prefix=${PIPELINE_PREFIX}" \
   --var "hashed_pipeline_prefix=${HASHED_PIPELINE_PREFIX}" \
@@ -57,5 +64,5 @@ PACKER_LOG=1 packer build \
   --var "gcp_project=${GCP_PROJECT}" \
   --var "gcp_network=${GCP_NETWORK}" \
   --var "gcp_subnetwork=${GCP_SUBNETWORK}" \
-  --var "use_internal_ip=true" \
+  --var "use_internal_ip=${INTERNAL}" \
   windows-packer.json


[geode] 09/21: GEODE-6959: Prevent NPE in GMSMembershipManager for null AlertAppender (#3899)

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit ff230945e5276499504a8ef81a3b91f28846714a
Author: Kirk Lund <kl...@apache.org>
AuthorDate: Thu Aug 8 14:59:44 2019 -0700

    GEODE-6959: Prevent NPE in GMSMembershipManager for null AlertAppender (#3899)
    
    If a custom log4j2.xml is used without specifying the Geode AlertAppender,
    GMSMembershipManager may throw a NullPointerException when invoking
    AlertAppender.getInstance().stopSession() during a forceDisconnect. This
    change prevents the NullPointerException, allowing forceDisconnect to finish.
    
    Users using Spring Boot with Logback are more likely to hit this bug.
    
    Co-authored-by: Mark Hanson mhanson@pivotal.io
    (cherry picked from commit dd15fec1f2ecbc3bc0cdfc42072252c379e0bb89)
---
 .../membership/gms/mgr/GMSMembershipManager.java   |  2 +-
 .../internal/logging/log4j/AlertAppender.java      | 16 ++++++++++++-
 .../internal/logging/log4j/AlertAppenderTest.java  | 26 ++++++++++++++++++++++
 3 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/geode-core/src/main/java/org/apache/geode/distributed/internal/membership/gms/mgr/GMSMembershipManager.java b/geode-core/src/main/java/org/apache/geode/distributed/internal/membership/gms/mgr/GMSMembershipManager.java
index e0b554e..680fc53 100644
--- a/geode-core/src/main/java/org/apache/geode/distributed/internal/membership/gms/mgr/GMSMembershipManager.java
+++ b/geode-core/src/main/java/org/apache/geode/distributed/internal/membership/gms/mgr/GMSMembershipManager.java
@@ -2565,7 +2565,7 @@ public class GMSMembershipManager implements MembershipManager, Manager {
     services.setShutdownCause(shutdownCause);
     services.getCancelCriterion().cancel(reason);
 
-    AlertAppender.getInstance().stopSession();
+    AlertAppender.stopSessionIfRunning();
 
     if (!inhibitForceDisconnectLogging) {
       logger.fatal(
diff --git a/geode-core/src/main/java/org/apache/geode/internal/logging/log4j/AlertAppender.java b/geode-core/src/main/java/org/apache/geode/internal/logging/log4j/AlertAppender.java
index d56a0da..40332da 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/logging/log4j/AlertAppender.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/logging/log4j/AlertAppender.java
@@ -36,6 +36,7 @@ import org.apache.logging.log4j.core.config.plugins.Plugin;
 import org.apache.logging.log4j.core.config.plugins.PluginBuilderAttribute;
 import org.apache.logging.log4j.core.config.plugins.PluginBuilderFactory;
 
+import org.apache.geode.annotations.VisibleForTesting;
 import org.apache.geode.annotations.internal.MakeNotStatic;
 import org.apache.geode.distributed.DistributedMember;
 import org.apache.geode.internal.alerting.AlertLevel;
@@ -337,7 +338,20 @@ public class AlertAppender extends AbstractAppender
     return listeners;
   }
 
-  public static AlertAppender getInstance() {
+  @VisibleForTesting
+  static AlertAppender getInstance() {
     return instanceRef.get();
   }
+
+  @VisibleForTesting
+  static void setInstance(AlertAppender alertAppender) {
+    instanceRef.set(alertAppender);
+  }
+
+  public static void stopSessionIfRunning() {
+    AlertAppender instance = instanceRef.get();
+    if (instance != null) {
+      instance.stopSession();
+    }
+  }
 }
diff --git a/geode-core/src/test/java/org/apache/geode/internal/logging/log4j/AlertAppenderTest.java b/geode-core/src/test/java/org/apache/geode/internal/logging/log4j/AlertAppenderTest.java
index 0ba7e45..6779b20 100644
--- a/geode-core/src/test/java/org/apache/geode/internal/logging/log4j/AlertAppenderTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/logging/log4j/AlertAppenderTest.java
@@ -15,9 +15,13 @@
 package org.apache.geode.internal.logging.log4j;
 
 import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatCode;
 import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.spy;
+import static org.mockito.Mockito.verify;
 
 import org.apache.logging.log4j.Level;
+import org.junit.After;
 import org.junit.Before;
 import org.junit.Rule;
 import org.junit.Test;
@@ -52,6 +56,11 @@ public class AlertAppenderTest {
     asAlertingProvider = alertAppender;
   }
 
+  @After
+  public void tearDown() {
+    AlertAppender.setInstance(null);
+  }
+
   @Test
   public void alertListenersIsEmptyByDefault() {
     assertThat(alertAppender.getAlertListeners()).isEmpty();
@@ -164,4 +173,21 @@ public class AlertAppenderTest {
     assertThat(alertAppender.getAlertListeners()).containsExactly(listener1, listener2,
         listener3);
   }
+
+  @Test
+  public void stopSessionIfRunningDoesNotThrowIfReferenceIsNull() {
+    AlertAppender.setInstance(null);
+
+    assertThatCode(AlertAppender::stopSessionIfRunning).doesNotThrowAnyException();
+  }
+
+  @Test
+  public void stopSessionIfRunningStopCurrentInstance() {
+    alertAppender = spy(alertAppender);
+    AlertAppender.setInstance(alertAppender);
+
+    AlertAppender.stopSessionIfRunning();
+
+    verify(alertAppender).stopSession();
+  }
 }


[geode] 11/21: move JDK11 testing from OpenJDK to AdoptOpenJDK going forward

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit 9ca4dca739be91562db115a0a61db8aef6500a8a
Author: Owen Nichols <on...@pivotal.io>
AuthorDate: Thu May 9 16:21:57 2019 -0700

    move JDK11 testing from OpenJDK to AdoptOpenJDK going forward
    
    (cherry picked from commit 874e4c82a5f72d4086d1982888a3bc4db48d20b7)
---
 ci/images/google-windows-geode-builder/windows-packer.json | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/ci/images/google-windows-geode-builder/windows-packer.json b/ci/images/google-windows-geode-builder/windows-packer.json
index 2c6ac4b..6bee8d6 100644
--- a/ci/images/google-windows-geode-builder/windows-packer.json
+++ b/ci/images/google-windows-geode-builder/windows-packer.json
@@ -42,8 +42,8 @@
         "Set-ExecutionPolicy Bypass -Scope Process -Force",
 
         "Invoke-WebRequest https://chocolatey.org/install.ps1 -UseBasicParsing | Invoke-Expression",
-        "choco install -y git rsync openjdk",
-        "Move-Item \"C:\\Program Files\\OpenJDK\\jdk-11*\" c:\\java11",
+        "choco install -y git rsync adoptopenjdk11",
+        "Move-Item \"C:\\Program Files\\AdoptOpenJDK\\jdk-11*\" c:\\java11",
         "choco install -y jdk8 -params 'installdir=c:\\\\java8tmp;source=false'",
         "Move-Item \"C:\\java8tmp\" c:\\java8",
         "choco install -y openssh --version 7.7.2.1 /SSHServerFeature",


[geode] 12/21: Update windows source image family. (#3602)

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit 4e9171f20a1fc1fa8920f1bc672d9066a71e7e8e
Author: Sean Goller <sg...@pivotal.io>
AuthorDate: Fri May 17 11:52:17 2019 -0700

    Update windows source image family. (#3602)
    
    Authored-by: Sean Goller <sg...@pivotal.io>
    (cherry picked from commit df9a3c0149f792569df6a7853d53740b406a286e)
---
 ci/images/google-windows-geode-builder/windows-packer.json | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ci/images/google-windows-geode-builder/windows-packer.json b/ci/images/google-windows-geode-builder/windows-packer.json
index 6bee8d6..bb75405 100644
--- a/ci/images/google-windows-geode-builder/windows-packer.json
+++ b/ci/images/google-windows-geode-builder/windows-packer.json
@@ -17,7 +17,7 @@
       "project_id": "{{user `gcp_project`}}",
       "network": "{{user `gcp_network`}}",
       "subnetwork": "{{user `gcp_subnetwork`}}",
-      "source_image_family": "windows-1709-core-for-containers",
+      "source_image_family": "windows-1809-core-for-containers",
       "disk_size": "100",
       "machine_type": "n1-standard-1",
       "communicator": "winrm",


[geode] 17/21: Fix packer configuration for windows image. (#3837)

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit 97acee90b9bbb201c02a3bcabb503cf05c3cfdb0
Author: Sean Goller <se...@goller.net>
AuthorDate: Wed Jul 24 03:53:28 2019 -0700

    Fix packer configuration for windows image. (#3837)
    
    * Docker needs to be started up, and the step that starts it needs to run elevated.
    * Made the openjdk image that is pulled generic, because the specific tags change.
    * Added an option to the build script to pass extra build arguments to the packer call (see the usage sketch below).
    
    (cherry picked from commit 6ee2e72114b0885a55746e898bd0210987d81143)
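    Since build_image.sh now forwards its command-line arguments to packer via
    PACKER_ARGS, extra arguments can simply be appended; a hypothetical example
    (the flag shown is a standard packer build option, not something added by
    this commit, and the project id is a placeholder):

        GCP_PROJECT=my-gcp-project ./build_image.sh -on-error=ask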
---
 .../google-windows-geode-builder/build_image.sh    |  3 ++-
 .../windows-packer.json                            | 24 ++++++++++++++--------
 2 files changed, 18 insertions(+), 9 deletions(-)

diff --git a/ci/images/google-windows-geode-builder/build_image.sh b/ci/images/google-windows-geode-builder/build_image.sh
index 8c6c3cd..c5dd1a4 100755
--- a/ci/images/google-windows-geode-builder/build_image.sh
+++ b/ci/images/google-windows-geode-builder/build_image.sh
@@ -18,6 +18,7 @@
 set -x
 
 PACKER=${PACKER:-packer135}
+PACKER_ARGS="${*}"
 INTERNAL=${INTERNAL:-true}
 SOURCE="${BASH_SOURCE[0]}"
 while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
@@ -56,7 +57,7 @@ fi
 HASHED_PIPELINE_PREFIX="i$(uuidgen -n @dns -s -N "${PIPELINE_PREFIX}")-"
 
 echo "Running packer"
-PACKER_LOG=1 ${PACKER} build \
+PACKER_LOG=1 ${PACKER} build ${PACKER_ARGS} \
   --var "geode_docker_image=${GEODE_DOCKER_IMAGE}" \
   --var "pipeline_prefix=${PIPELINE_PREFIX}" \
   --var "hashed_pipeline_prefix=${HASHED_PIPELINE_PREFIX}" \
diff --git a/ci/images/google-windows-geode-builder/windows-packer.json b/ci/images/google-windows-geode-builder/windows-packer.json
index 4c6ee0c..c364d3c 100644
--- a/ci/images/google-windows-geode-builder/windows-packer.json
+++ b/ci/images/google-windows-geode-builder/windows-packer.json
@@ -61,7 +61,6 @@
         "Move-Item \"C:\\java8tmp\" c:\\java8",
         "choco install -y openssh --version 7.7.2.1 /SSHServerFeature",
         "refreshenv",
-
         "$a = 10",
         "do {",
         "write-output \">>>>>>>>>> Installing rsync: $a attempts remaining <<<<<<<<<<\"",
@@ -69,7 +68,6 @@
         "$a--",
         "} while (-not (test-path C:\\ProgramData\\chocolatey\\bin\\rsync.exe) -and $a -gt 0)",
         "get-item C:\\ProgramData\\chocolatey\\bin\\rsync.exe",
-
         "winrm set winrm/config/service '@{AllowUnencrypted=\"true\"}'",
         "New-NetFirewallRule -DisplayName sshd -Direction inbound -Action allow -Protocol tcp -LocalPort 22",
         "New-NetFirewallRule -DisplayName \"Docker containers\" -LocalAddress 172.0.0.0/8 -Action allow -Direction inbound",
@@ -78,14 +76,24 @@
         "$NewPath = $OldPath + ';' + 'c:\\Program Files\\Git\\bin'",
         "Set-ItemProperty -Path 'Registry::HKEY_LOCAL_MACHINE\\System\\CurrentControlSet\\Control\\Session Manager\\Environment' -Name PATH -Value $NewPath",
         "write-output '>>>>>>>>>> Modify sshd config to comment use of administrators authorized key file <<<<<<<<<<'",
-        "(Get-Content \"C:\\Program Files\\OpenSSH-Win64\\sshd_config_default\") -replace '(Match Group administrators)', '#$1' -replace '(\\s*AuthorizedKeysFile.*)', '#$1' | Out-File \"C:\\Program Files\\OpenSSH-Win64\\sshd_config_default\" -encoding UTF8",
+        "(Get-Content \"C:\\Program Files\\OpenSSH-Win64\\sshd_config_default\") -replace '(Match Group administrators)', '#$1' -replace '(\\s*AuthorizedKeysFile.*)', '#$1' | Out-File \"C:\\Program Files\\OpenSSH-Win64\\sshd_config_default\" -encoding UTF8"
+      ]
+    },
+    {
+      "type":  "powershell",
+      "elevated_user": "geode",
+      "elevated_password": "{{.WinRMPassword}}",
+      "inline": [
+        "net start \"Docker Engine\"",
         "write-output '>>>>>>>>>> Adding openjdk docker image <<<<<<<<<<'",
-        "docker pull openjdk:8u212-jdk-windowsservercore-1809",
+        "docker pull openjdk:8",
         "write-output '>>>>>>>>>> Removing unused docker images <<<<<<<<<<'",
-        "docker rmi microsoft/windowsservercore:1809",
-        "docker rmi microsoft/nanoserver:1809",
-
-        "Set-Content -Path c:\\ProgramData\\docker\\config\\daemon.json -Value '{ \"hosts\": [\"tcp://0.0.0.0:2375\", \"npipe://\"] }'",
+        "Set-Content -Path c:\\ProgramData\\docker\\config\\daemon.json -Value '{ \"hosts\": [\"tcp://0.0.0.0:2375\", \"npipe://\"] }'"
+      ]
+    },
+    {
+      "type":  "powershell",
+      "inline": [
 
         "write-output '>>>>>>>>>> Cloning geode repo <<<<<<<<<<'",
         "& 'c:\\Program Files\\Git\\bin\\git.exe' clone -b develop --depth 1 https://github.com/apache/geode.git geode",


[geode] 01/21: Upgraded version number for releasing 1.9.1

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit 92b3eccf3d0a1d586d025d1d3f3a8b9993169bd1
Author: Owen Nichols <on...@pivotal.io>
AuthorDate: Tue Aug 20 11:50:06 2019 -0700

    Upgraded version number for releasing 1.9.1
---
 .../src/test/resources/expected-pom.xml            | 48 +++++++++++-----------
 .../src/test/resources/expected-pom.xml            |  6 +--
 ci/pipelines/geode-build/jinja.template.yml        |  2 +-
 ci/pipelines/meta/meta.properties                  |  2 +-
 ci/pipelines/shared/jinja.variables.yml            |  4 +-
 geode-assembly/src/test/resources/expected-pom.xml |  4 +-
 geode-book/config.yml                              |  6 +--
 geode-book/redirects.rb                            |  4 +-
 geode-common/src/test/resources/expected-pom.xml   |  4 +-
 .../src/test/resources/expected-pom.xml            |  4 +-
 .../src/test/resources/expected-pom.xml            |  4 +-
 .../java/org/apache/geode/internal/Version.java    | 10 ++++-
 .../cache/tier/sockets/CommandInitializer.java     |  1 +
 geode-core/src/test/resources/expected-pom.xml     |  4 +-
 geode-cq/src/test/resources/expected-pom.xml       |  4 +-
 geode-dunit/src/test/resources/expected-pom.xml    |  4 +-
 .../src/test/resources/expected-pom.xml            |  4 +-
 geode-junit/src/test/resources/expected-pom.xml    |  4 +-
 geode-lucene/src/test/resources/expected-pom.xml   |  4 +-
 .../src/test/resources/expected-pom.xml            |  4 +-
 .../src/test/resources/expected-pom.xml            |  4 +-
 .../src/test/resources/expected-pom.xml            |  4 +-
 geode-protobuf/src/test/resources/expected-pom.xml |  4 +-
 geode-pulse/src/test/resources/expected-pom.xml    |  4 +-
 .../src/test/resources/expected-pom.xml            |  4 +-
 geode-redis/src/test/resources/expected-pom.xml    |  4 +-
 geode-wan/src/test/resources/expected-pom.xml      |  4 +-
 geode-web-api/src/test/resources/expected-pom.xml  |  4 +-
 .../src/test/resources/expected-pom.xml            |  4 +-
 geode-web/src/test/resources/expected-pom.xml      |  4 +-
 gradle.properties                                  |  2 +-
 31 files changed, 88 insertions(+), 81 deletions(-)

diff --git a/boms/geode-all-bom/src/test/resources/expected-pom.xml b/boms/geode-all-bom/src/test/resources/expected-pom.xml
index bc7820b..1c2915f 100644
--- a/boms/geode-all-bom/src/test/resources/expected-pom.xml
+++ b/boms/geode-all-bom/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-all-bom</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <packaging>pom</packaging>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
@@ -635,117 +635,117 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-common</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-concurrency-test</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-connectors</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-core</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-cq</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-dunit</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-experimental-driver</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-junit</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-lucene</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-management</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-old-client-support</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-protobuf</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-protobuf-messages</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-pulse</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-rebalancer</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-redis</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-wan</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-web</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-web-api</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-web-management</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-lucene-test</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-protobuf-test</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-pulse-test</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
     </dependencies>
   </dependencyManagement>
diff --git a/boms/geode-client-bom/src/test/resources/expected-pom.xml b/boms/geode-client-bom/src/test/resources/expected-pom.xml
index 2083af7..fe25698 100644
--- a/boms/geode-client-bom/src/test/resources/expected-pom.xml
+++ b/boms/geode-client-bom/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-client-bom</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <packaging>pom</packaging>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
@@ -40,7 +40,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-core</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <exclusions>
           <exclusion>
             <groupId>org.apache.shiro</groupId>
@@ -55,7 +55,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-cq</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
       </dependency>
     </dependencies>
   </dependencyManagement>
diff --git a/ci/pipelines/geode-build/jinja.template.yml b/ci/pipelines/geode-build/jinja.template.yml
index a248114..dfb190f 100644
--- a/ci/pipelines/geode-build/jinja.template.yml
+++ b/ci/pipelines/geode-build/jinja.template.yml
@@ -184,7 +184,7 @@ resources:
   source:
     bucket: ((version-bucket))
     driver: gcs
-    initial_version: 1.9.0
+    initial_version: 1.9.1
     json_key: ((!concourse-gcp-key))
     key: ((pipeline-prefix))((geode-build-branch))/version
 - name: daily
diff --git a/ci/pipelines/meta/meta.properties b/ci/pipelines/meta/meta.properties
index 69c9e9e..7f1ea09 100644
--- a/ci/pipelines/meta/meta.properties
+++ b/ci/pipelines/meta/meta.properties
@@ -24,4 +24,4 @@ PUBLIC=true
 REPOSITORY_PUBLIC=true
 GRADLE_GLOBAL_ARGS=""
 MAVEN_SNAPSHOT_BUCKET=gcs://maven.apachegeode-ci.info/snapshots/
-SEMVER_PRERELEASE_TOKEN=SNAPSHOT
+SEMVER_PRERELEASE_TOKEN=""
diff --git a/ci/pipelines/shared/jinja.variables.yml b/ci/pipelines/shared/jinja.variables.yml
index 3a00e4e..e3650a0 100644
--- a/ci/pipelines/shared/jinja.variables.yml
+++ b/ci/pipelines/shared/jinja.variables.yml
@@ -57,9 +57,9 @@ java_test_versions:
   version: 11
 
 benchmarks:
-  branch: "release/1.9.0"
+  branch: "develop"
   baseline_branch: ""
-  baseline_version: "1.8.0"
+  baseline_version: "1.9.0"
 
 java_build_version:
   name: OpenJDK8
diff --git a/geode-assembly/src/test/resources/expected-pom.xml b/geode-assembly/src/test/resources/expected-pom.xml
index 2ca5b5d..3e77235 100644
--- a/geode-assembly/src/test/resources/expected-pom.xml
+++ b/geode-assembly/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>apache-geode</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <packaging>tgz</packaging>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
@@ -40,7 +40,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-book/config.yml b/geode-book/config.yml
index c4d2b24..97f60d0 100644
--- a/geode-book/config.yml
+++ b/geode-book/config.yml
@@ -21,14 +21,14 @@ public_host: localhost
 sections:
 - repository:
     name: geode-docs
-  directory: docs/guide/19
+  directory: docs/guide/191
   subnav_template: geode-subnav
 
 template_variables:
   product_name_long: Apache Geode
   product_name: Geode
-  product_version: 1.9
-  product_version_nodot: 19
+  product_version: 1.9.1
+  product_version_nodot: 191
   min_java_update: 121
   support_url: http://geode.apache.org/community
   product_url: http://geode.apache.org/
diff --git a/geode-book/redirects.rb b/geode-book/redirects.rb
index 1585fe9..a70986a 100644
--- a/geode-book/redirects.rb
+++ b/geode-book/redirects.rb
@@ -14,5 +14,5 @@
 #permissions and limitations under the License.
 
 r301 %r{/releases/latest/javadoc/(.*)}, 'http://geode.apache.org/releases/latest/javadoc/$1'
-rewrite '/', '/docs/guide/19/about_geode.html'
-rewrite '/index.html', '/docs/guide/19/about_geode.html'
+rewrite '/', '/docs/guide/191/about_geode.html'
+rewrite '/index.html', '/docs/guide/191/about_geode.html'
diff --git a/geode-common/src/test/resources/expected-pom.xml b/geode-common/src/test/resources/expected-pom.xml
index adbc0df..e6fd6f3 100644
--- a/geode-common/src/test/resources/expected-pom.xml
+++ b/geode-common/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-common</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-concurrency-test/src/test/resources/expected-pom.xml b/geode-concurrency-test/src/test/resources/expected-pom.xml
index c7728b6..82c1108 100644
--- a/geode-concurrency-test/src/test/resources/expected-pom.xml
+++ b/geode-concurrency-test/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-concurrency-test</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-connectors/src/test/resources/expected-pom.xml b/geode-connectors/src/test/resources/expected-pom.xml
index eb4c983..6f48d35 100644
--- a/geode-connectors/src/test/resources/expected-pom.xml
+++ b/geode-connectors/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-connectors</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-core/src/main/java/org/apache/geode/internal/Version.java b/geode-core/src/main/java/org/apache/geode/internal/Version.java
index 29492dc..a4a9a1d 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/Version.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/Version.java
@@ -57,7 +57,7 @@ public class Version implements Comparable<Version> {
   /** byte used as ordinal to represent this <code>Version</code> */
   private final short ordinal;
 
-  public static final int HIGHEST_VERSION = 100;
+  public static final int HIGHEST_VERSION = 101;
 
   @Immutable
   private static final Version[] VALUES = new Version[HIGHEST_VERSION + 1];
@@ -263,6 +263,12 @@ public class Version implements Comparable<Version> {
   public static final Version GEODE_1_9_0 =
       new Version("GEODE", "1.9.0", (byte) 1, (byte) 9, (byte) 0, (byte) 0, GEODE_1_9_0_ORDINAL);
 
+  private static final byte GEODE_1_9_1_ORDINAL = 101;
+
+  @Immutable
+  public static final Version GEODE_1_9_1 =
+      new Version("GEODE", "1.9.1", (byte) 1, (byte) 9, (byte) 1, (byte) 0, GEODE_1_9_1_ORDINAL);
+
   /* NOTE: when adding a new version bump the ordinal by 5. Ordinals can be short ints */
 
   /**
@@ -270,7 +276,7 @@ public class Version implements Comparable<Version> {
    * HIGHEST_VERSION when changing CURRENT !!!
    */
   @Immutable
-  public static final Version CURRENT = GEODE_1_9_0;
+  public static final Version CURRENT = GEODE_1_9_1;
 
   /**
    * A lot of versioning code needs access to the current version's ordinal
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/CommandInitializer.java b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/CommandInitializer.java
index fd4b784..0f8b6ab 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/CommandInitializer.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/CommandInitializer.java
@@ -337,6 +337,7 @@ public class CommandInitializer {
         ExecuteRegionFunctionGeode18.getCommand());
     allCommands.put(Version.GEODE_1_8_0, geode18Commands);
     allCommands.put(Version.GEODE_1_9_0, geode18Commands);
+    allCommands.put(Version.GEODE_1_9_1, geode18Commands);
 
     return Collections.unmodifiableMap(allCommands);
   }
diff --git a/geode-core/src/test/resources/expected-pom.xml b/geode-core/src/test/resources/expected-pom.xml
index 9213f09..604bbe5 100644
--- a/geode-core/src/test/resources/expected-pom.xml
+++ b/geode-core/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-core</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-cq/src/test/resources/expected-pom.xml b/geode-cq/src/test/resources/expected-pom.xml
index c021503..a10c294 100644
--- a/geode-cq/src/test/resources/expected-pom.xml
+++ b/geode-cq/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-cq</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-dunit/src/test/resources/expected-pom.xml b/geode-dunit/src/test/resources/expected-pom.xml
index 667b86f..a3b0543 100644
--- a/geode-dunit/src/test/resources/expected-pom.xml
+++ b/geode-dunit/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-dunit</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-experimental-driver/src/test/resources/expected-pom.xml b/geode-experimental-driver/src/test/resources/expected-pom.xml
index 68c4bdc..b597504 100644
--- a/geode-experimental-driver/src/test/resources/expected-pom.xml
+++ b/geode-experimental-driver/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-experimental-driver</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-junit/src/test/resources/expected-pom.xml b/geode-junit/src/test/resources/expected-pom.xml
index 3e29b30..02f9106 100644
--- a/geode-junit/src/test/resources/expected-pom.xml
+++ b/geode-junit/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-junit</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-lucene/src/test/resources/expected-pom.xml b/geode-lucene/src/test/resources/expected-pom.xml
index 6741b02..70059a6 100644
--- a/geode-lucene/src/test/resources/expected-pom.xml
+++ b/geode-lucene/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-lucene</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-management/src/test/resources/expected-pom.xml b/geode-management/src/test/resources/expected-pom.xml
index eb82627..8ada5ae 100644
--- a/geode-management/src/test/resources/expected-pom.xml
+++ b/geode-management/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-management</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-old-client-support/src/test/resources/expected-pom.xml b/geode-old-client-support/src/test/resources/expected-pom.xml
index 65a4bd9..ccfd268 100644
--- a/geode-old-client-support/src/test/resources/expected-pom.xml
+++ b/geode-old-client-support/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-old-client-support</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-protobuf-messages/src/test/resources/expected-pom.xml b/geode-protobuf-messages/src/test/resources/expected-pom.xml
index 6855ddd..8484355 100644
--- a/geode-protobuf-messages/src/test/resources/expected-pom.xml
+++ b/geode-protobuf-messages/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-protobuf-messages</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-protobuf/src/test/resources/expected-pom.xml b/geode-protobuf/src/test/resources/expected-pom.xml
index 7c8b258..4f99494 100644
--- a/geode-protobuf/src/test/resources/expected-pom.xml
+++ b/geode-protobuf/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-protobuf</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-pulse/src/test/resources/expected-pom.xml b/geode-pulse/src/test/resources/expected-pom.xml
index 3f7d7f7..bed85b2 100644
--- a/geode-pulse/src/test/resources/expected-pom.xml
+++ b/geode-pulse/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-pulse</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-rebalancer/src/test/resources/expected-pom.xml b/geode-rebalancer/src/test/resources/expected-pom.xml
index 79cdae5..a7891f8 100644
--- a/geode-rebalancer/src/test/resources/expected-pom.xml
+++ b/geode-rebalancer/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-rebalancer</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-redis/src/test/resources/expected-pom.xml b/geode-redis/src/test/resources/expected-pom.xml
index 65b4d67..3590c8a 100644
--- a/geode-redis/src/test/resources/expected-pom.xml
+++ b/geode-redis/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-redis</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-wan/src/test/resources/expected-pom.xml b/geode-wan/src/test/resources/expected-pom.xml
index 5169e35..b2828b2 100644
--- a/geode-wan/src/test/resources/expected-pom.xml
+++ b/geode-wan/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-wan</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-web-api/src/test/resources/expected-pom.xml b/geode-web-api/src/test/resources/expected-pom.xml
index da31d54..206ef39 100644
--- a/geode-web-api/src/test/resources/expected-pom.xml
+++ b/geode-web-api/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-web-api</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-web-management/src/test/resources/expected-pom.xml b/geode-web-management/src/test/resources/expected-pom.xml
index e4d0ad8..1e221cd 100644
--- a/geode-web-management/src/test/resources/expected-pom.xml
+++ b/geode-web-management/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-web-management</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/geode-web/src/test/resources/expected-pom.xml b/geode-web/src/test/resources/expected-pom.xml
index 71e7210..0b69043 100644
--- a/geode-web/src/test/resources/expected-pom.xml
+++ b/geode-web/src/test/resources/expected-pom.xml
@@ -19,7 +19,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.geode</groupId>
   <artifactId>geode-web</artifactId>
-  <version>1.9.0</version>
+  <version>1.9.1</version>
   <name>Apache Geode</name>
   <description>Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing</description>
   <url>http://geode.apache.org</url>
@@ -39,7 +39,7 @@
       <dependency>
         <groupId>org.apache.geode</groupId>
         <artifactId>geode-all-bom</artifactId>
-        <version>1.9.0</version>
+        <version>1.9.1</version>
         <type>pom</type>
         <scope>import</scope>
       </dependency>
diff --git a/gradle.properties b/gradle.properties
index 66f55ef..50c5d5f 100755
--- a/gradle.properties
+++ b/gradle.properties
@@ -28,7 +28,7 @@
 #   <blank>   - release
 #
 # The full version string consists of 'versionNumber + releaseQualifier + releaseType'
-version = 1.9.0
+version = 1.9.1
 
 # Default Maven targets
 mavenSnapshotUrl = gcs://maven.apachegeode-ci.info/snapshots


[geode] 10/21: GEODE-7050: Use Log4jAgent only if Log4j is using Log4jProvider (#3892)

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit bf4703072f934b3f5fecfcd5ad4cd46fec584022
Author: Kirk Lund <kl...@apache.org>
AuthorDate: Thu Aug 8 18:17:32 2019 -0700

    GEODE-7050: Use Log4jAgent only if Log4j is using Log4jProvider (#3892)
    
    * GEODE-7050: Use Log4jAgent only if Log4j is using Log4jProvider
    
    This change prevents Geode from using Log4jAgent if Log4j Core is
    present but not using Log4jProvider.
    
    For example, Log4j Core uses SLF4JProvider when log4j-to-slf4j is in
    the classpath.
    
    Disabling Log4jAgent when other Log4j Providers are in use prevents
    problems such as ClassCastExceptions when attempting to cast loggers
    from org.apache.logging.slf4j.SLF4JLogger to
    org.apache.logging.log4j.core.Logger to get the LoggerConfig or
    LoggerContext.
    
    * Update geode-core/src/test/java/org/apache/geode/internal/logging/DefaultProviderCheckerTest.java
    
    Co-Authored-By: Aaron Lindsey <al...@pivotal.io>
    (cherry picked from commit e5c9c420f462149fd062847904e3435fbe99afb4)
---
 .../internal/logging/DefaultProviderChecker.java   |  82 +++++++++++
 .../internal/logging/ProviderAgentLoader.java      |  26 +---
 .../logging/DefaultProviderCheckerTest.java        | 152 +++++++++++++++++++++
 3 files changed, 236 insertions(+), 24 deletions(-)

diff --git a/geode-core/src/main/java/org/apache/geode/internal/logging/DefaultProviderChecker.java b/geode-core/src/main/java/org/apache/geode/internal/logging/DefaultProviderChecker.java
new file mode 100644
index 0000000..fae6390
--- /dev/null
+++ b/geode-core/src/main/java/org/apache/geode/internal/logging/DefaultProviderChecker.java
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.logging;
+
+import java.util.function.Function;
+import java.util.function.Supplier;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.apache.logging.log4j.status.StatusLogger;
+
+import org.apache.geode.annotations.VisibleForTesting;
+import org.apache.geode.internal.ClassPathLoader;
+import org.apache.geode.internal.logging.ProviderAgentLoader.AvailabilityChecker;
+
+class DefaultProviderChecker implements AvailabilityChecker {
+
+  /**
+   * The default {@code ProviderAgent} is {@code Log4jAgent}.
+   */
+  static final String DEFAULT_PROVIDER_AGENT_NAME =
+      "org.apache.geode.internal.logging.log4j.Log4jAgent";
+
+  static final String DEFAULT_PROVIDER_CLASS_NAME =
+      "org.apache.logging.log4j.core.impl.Log4jContextFactory";
+
+  private final Supplier<Class> contextFactoryClassSupplier;
+  private final Function<String, Boolean> isClassLoadableFunction;
+  private final Logger logger;
+
+  DefaultProviderChecker() {
+    this(() -> LogManager.getFactory().getClass(), DefaultProviderChecker::isClassLoadable,
+        StatusLogger.getLogger());
+  }
+
+  @VisibleForTesting
+  DefaultProviderChecker(Supplier<Class> contextFactoryClassSupplier,
+      Function<String, Boolean> isClassLoadableFunction,
+      Logger logger) {
+    this.contextFactoryClassSupplier = contextFactoryClassSupplier;
+    this.isClassLoadableFunction = isClassLoadableFunction;
+    this.logger = logger;
+  }
+
+  @Override
+  public boolean isAvailable() {
+    if (!isClassLoadableFunction.apply(DEFAULT_PROVIDER_CLASS_NAME)) {
+      logger.info("Unable to find Log4j Core.");
+      return false;
+    }
+
+    boolean usingLog4jProvider =
+        DEFAULT_PROVIDER_CLASS_NAME.equals(contextFactoryClassSupplier.get().getName());
+    String message = "Log4j Core is available "
+        + (usingLog4jProvider ? "and using" : "but not using") + " Log4jProvider.";
+    logger.info(message);
+    return usingLog4jProvider;
+  }
+
+  @VisibleForTesting
+  static boolean isClassLoadable(String className) {
+    try {
+      ClassPathLoader.getLatest().forName(className);
+      return true;
+    } catch (ClassNotFoundException e) {
+      return false;
+    }
+  }
+
+}
diff --git a/geode-core/src/main/java/org/apache/geode/internal/logging/ProviderAgentLoader.java b/geode-core/src/main/java/org/apache/geode/internal/logging/ProviderAgentLoader.java
index 5d6dd98..0671f3b 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/logging/ProviderAgentLoader.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/logging/ProviderAgentLoader.java
@@ -15,7 +15,7 @@
 package org.apache.geode.internal.logging;
 
 import static org.apache.geode.internal.lang.SystemPropertyHelper.GEODE_PREFIX;
-import static org.apache.geode.internal.logging.ProviderAgentLoader.DefaultProvider.DEFAULT_PROVIDER_AGENT_NAME;
+import static org.apache.geode.internal.logging.DefaultProviderChecker.DEFAULT_PROVIDER_AGENT_NAME;
 
 import java.util.ServiceLoader;
 
@@ -51,7 +51,7 @@ public class ProviderAgentLoader {
   private final AvailabilityChecker availabilityChecker;
 
   public ProviderAgentLoader() {
-    this(new DefaultProvider());
+    this(new DefaultProviderChecker());
   }
 
   @VisibleForTesting
@@ -132,26 +132,4 @@ public class ProviderAgentLoader {
     boolean isAvailable();
   }
 
-  static class DefaultProvider implements AvailabilityChecker {
-
-    /**
-     * The default {@code ProviderAgent} is {@code Log4jAgent}.
-     */
-    static final String DEFAULT_PROVIDER_AGENT_NAME =
-        "org.apache.geode.internal.logging.log4j.Log4jAgent";
-
-    static final String DEFAULT_PROVIDER_CLASS_NAME = "org.apache.logging.log4j.core.Logger";
-
-    @Override
-    public boolean isAvailable() {
-      try {
-        ClassPathLoader.getLatest().forName(DEFAULT_PROVIDER_CLASS_NAME);
-        LOGGER.info("Log4j Core is available");
-        return true;
-      } catch (ClassNotFoundException | ClassCastException e) {
-        LOGGER.info("Unable to find Log4j Core");
-      }
-      return false;
-    }
-  }
 }
diff --git a/geode-core/src/test/java/org/apache/geode/internal/logging/DefaultProviderCheckerTest.java b/geode-core/src/test/java/org/apache/geode/internal/logging/DefaultProviderCheckerTest.java
new file mode 100644
index 0000000..ac16278
--- /dev/null
+++ b/geode-core/src/test/java/org/apache/geode/internal/logging/DefaultProviderCheckerTest.java
@@ -0,0 +1,152 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.logging;
+
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.catchThrowable;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+
+import org.apache.logging.log4j.Logger;
+import org.apache.logging.log4j.core.impl.Log4jContextFactory;
+import org.apache.logging.log4j.spi.LoggerContextFactory;
+import org.junit.Test;
+import org.mockito.ArgumentCaptor;
+
+import org.apache.geode.internal.ClassPathLoader;
+
+public class DefaultProviderCheckerTest {
+
+  @Test
+  public void isAvailableReturnsTrueIfAbleToLoadDefaultProviderClass() {
+    DefaultProviderChecker checker = new DefaultProviderChecker(() -> Log4jContextFactory.class,
+        (a) -> true, mock(Logger.class));
+
+    boolean value = checker.isAvailable();
+
+    assertThat(value).isTrue();
+  }
+
+  @Test
+  public void isAvailableReturnsFalseIfUnableToLoadDefaultProviderClass() {
+    DefaultProviderChecker checker = new DefaultProviderChecker(() -> Log4jContextFactory.class,
+        (a) -> false, mock(Logger.class));
+
+    boolean value = checker.isAvailable();
+
+    assertThat(value).isFalse();
+  }
+
+  @Test
+  public void isAvailableReturnsFalseIfNotUsingLog4jProvider() {
+    DefaultProviderChecker checker = new DefaultProviderChecker(
+        () -> mock(LoggerContextFactory.class).getClass(), (a) -> true, mock(Logger.class));
+
+    boolean value = checker.isAvailable();
+
+    assertThat(value).isFalse();
+  }
+
+  @Test
+  public void logsUsingMessageIfUsingLog4jProvider() {
+    Logger logger = mock(Logger.class);
+    DefaultProviderChecker checker =
+        new DefaultProviderChecker(() -> Log4jContextFactory.class, (a) -> true, logger);
+
+    boolean value = checker.isAvailable();
+
+    assertThat(value).isTrue();
+
+    ArgumentCaptor<String> loggedMessage = ArgumentCaptor.forClass(String.class);
+    verify(logger).info(loggedMessage.capture());
+
+    assertThat(loggedMessage.getValue())
+        .isEqualTo("Log4j Core is available and using Log4jProvider.");
+  }
+
+  @Test
+  public void logsNotUsingMessageIfNotUsingLog4jProvider() {
+    Logger logger = mock(Logger.class);
+    DefaultProviderChecker checker = new DefaultProviderChecker(
+        () -> mock(LoggerContextFactory.class).getClass(), (a) -> true, logger);
+
+    boolean value = checker.isAvailable();
+
+    assertThat(value).isFalse();
+
+    ArgumentCaptor<String> loggedMessage = ArgumentCaptor.forClass(String.class);
+    verify(logger).info(loggedMessage.capture());
+
+    assertThat(loggedMessage.getValue())
+        .isEqualTo("Log4j Core is available but not using Log4jProvider.");
+  }
+
+  @Test
+  public void logsUnableToFindMessageIfClassNotFoundExceptionIsCaught() {
+    Logger logger = mock(Logger.class);
+    DefaultProviderChecker checker =
+        new DefaultProviderChecker(() -> Log4jContextFactory.class, (a) -> false, logger);
+
+    boolean value = checker.isAvailable();
+
+    assertThat(value).isFalse();
+
+    ArgumentCaptor<String> loggedMessage = ArgumentCaptor.forClass(String.class);
+    verify(logger).info(loggedMessage.capture());
+
+    assertThat(loggedMessage.getValue()).isEqualTo("Unable to find Log4j Core.");
+  }
+
+  @Test
+  public void rethrowsIfIsClassLoadableFunctionThrowsRuntimeException() {
+    RuntimeException exception = new RuntimeException("expected");
+    DefaultProviderChecker checker =
+        new DefaultProviderChecker(() -> Log4jContextFactory.class, (a) -> {
+          throw exception;
+        }, mock(Logger.class));
+
+    Throwable thrown = catchThrowable(() -> checker.isAvailable());
+
+    assertThat(thrown).isSameAs(exception);
+  }
+
+  @Test
+  public void isClassLoadableReturnsTrueIfClassNameExists() {
+    boolean value = DefaultProviderChecker.isClassLoadable(ClassPathLoader.class.getName());
+
+    assertThat(value).isTrue();
+  }
+
+  @Test
+  public void isClassLoadableReturnsFalseIfClassNameDoesNotExist() {
+    boolean value = DefaultProviderChecker.isClassLoadable("Not a class");
+
+    assertThat(value).isFalse();
+  }
+
+  @Test
+  public void isClassLoadableThrowsNullPointerExceptionIfClassNameIsNull() {
+    Throwable thrown = catchThrowable(() -> DefaultProviderChecker.isClassLoadable(null));
+
+    assertThat(thrown).isInstanceOf(NullPointerException.class);
+  }
+
+  @Test
+  public void isClassLoadableReturnsFalseIfClassNameIsEmpty() {
+    boolean value = DefaultProviderChecker.isClassLoadable("");
+
+    assertThat(value).isFalse();
+  }
+}
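
Editor's note: the heart of this patch is simply comparing the LoggerContextFactory that Log4j reports against the one installed by log4j-core's Log4jProvider. The minimal standalone sketch below illustrates that check using only the standard Log4j 2 API; it assumes log4j-api is on the classpath, and the class name Log4jProviderProbe is illustrative only — Geode's actual logic lives in DefaultProviderChecker shown in the diff above.

import org.apache.logging.log4j.LogManager;

public class Log4jProviderProbe {

  // Fully qualified name of the LoggerContextFactory supplied by log4j-core's
  // Log4jProvider (same constant the patch uses as DEFAULT_PROVIDER_CLASS_NAME).
  private static final String LOG4J_CORE_FACTORY =
      "org.apache.logging.log4j.core.impl.Log4jContextFactory";

  public static void main(String[] args) {
    // LogManager.getFactory() returns whichever LoggerContextFactory the active
    // Log4j provider installed (e.g. Log4jContextFactory for log4j-core, or an
    // SLF4J-backed factory when log4j-to-slf4j is on the classpath).
    String activeFactory = LogManager.getFactory().getClass().getName();

    if (LOG4J_CORE_FACTORY.equals(activeFactory)) {
      System.out.println("Log4j Core is available and using Log4jProvider.");
      // Only in this case is it safe to cast org.apache.logging.log4j.Logger
      // instances to org.apache.logging.log4j.core.Logger to reach the
      // LoggerConfig or LoggerContext.
    } else {
      System.out.println("Log4j is routed through " + activeFactory
          + "; skip log4j-core-specific wiring to avoid ClassCastExceptions.");
    }
  }
}
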


[geode] 02/21: Revert "GEODE-6468 [CI Failure] ClusterCommunicationsDUnitTest fails on createEntryAndVerifyUpdate"

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit 5dfad63e4e43a693556c8f2e011b43006ab8e68a
Author: Bruce Schuchardt <bs...@pivotal.io>
AuthorDate: Thu Jun 27 14:34:43 2019 -0700

    Revert "GEODE-6468 [CI Failure] ClusterCommunicationsDUnitTest fails on createEntryAndVerifyUpdate"
    
    This reverts commit 6d1d82a15a5c548b2aafeff8bf023d12044581e7.
---
 .../java/org/apache/geode/ClusterCommunicationsDUnitTest.java | 11 +++++------
 .../main/java/org/apache/geode/internal/tcp/Connection.java   | 10 +++++-----
 2 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/geode-core/src/distributedTest/java/org/apache/geode/ClusterCommunicationsDUnitTest.java b/geode-core/src/distributedTest/java/org/apache/geode/ClusterCommunicationsDUnitTest.java
index 1568740..c970f77 100644
--- a/geode-core/src/distributedTest/java/org/apache/geode/ClusterCommunicationsDUnitTest.java
+++ b/geode-core/src/distributedTest/java/org/apache/geode/ClusterCommunicationsDUnitTest.java
@@ -70,7 +70,6 @@ import org.apache.geode.distributed.internal.membership.gms.membership.GMSJoinLe
 import org.apache.geode.internal.DSFIDFactory;
 import org.apache.geode.internal.cache.DirectReplyMessage;
 import org.apache.geode.test.dunit.Host;
-import org.apache.geode.test.dunit.IgnoredException;
 import org.apache.geode.test.dunit.Invoke;
 import org.apache.geode.test.dunit.VM;
 import org.apache.geode.test.dunit.rules.DistributedRule;
@@ -127,7 +126,7 @@ public class ClusterCommunicationsDUnitTest implements java.io.Serializable {
   @Rule
   public final SerializableTestName testName = new SerializableTestName();
 
-  private final String regionName = "clusterTestRegion";
+  final String regionName = "clusterTestRegion";
 
   public ClusterCommunicationsDUnitTest(RunConfiguration runConfiguration) {
     this.useSSL = runConfiguration.useSSL;
@@ -142,8 +141,6 @@ public class ClusterCommunicationsDUnitTest implements java.io.Serializable {
       this.useSSL = testWithSSL;
       this.conserveSockets = testWithConserveSocketsTrue;
     });
-    IgnoredException.addIgnoredException("Socket Closed");
-    IgnoredException.addIgnoredException("Remote host closed connection during handshake");
   }
 
   @Test
@@ -189,7 +186,7 @@ public class ClusterCommunicationsDUnitTest implements java.io.Serializable {
       VM.getVM(1).invoke("receive a large direct-reply message", () -> {
         SerialAckedMessageWithBigReply messageWithBigReply = new SerialAckedMessageWithBigReply();
         await().until(() -> {
-          messageWithBigReply.send(Collections.singleton(vm2ID));
+          messageWithBigReply.send(Collections.<DistributedMember>singleton(vm2ID));
           return true;
         });
       });
@@ -231,7 +228,9 @@ public class ClusterCommunicationsDUnitTest implements java.io.Serializable {
     createCacheAndRegion(server2VM, locatorPort);
 
     // roll server1 to the current version
-    server1VM.invoke("stop server1", () -> cache.close());
+    server1VM.invoke("stop server1", () -> {
+      cache.close();
+    });
     server1VM = Host.getHost(0).getVM(VersionManager.CURRENT_VERSION, 1);
     createCacheAndRegion(server1VM, locatorPort);
 
diff --git a/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java
index 2ba313e..e659496 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java
@@ -756,11 +756,6 @@ public class Connection implements Runnable {
   }
 
   private void notifyHandshakeWaiter(boolean success) {
-    if (getConduit().useSSL() && ioFilter != null) {
-      // clear out any remaining handshake bytes
-      ByteBuffer buffer = ioFilter.getUnwrappedBuffer(inputBuffer);
-      buffer.position(0).limit(0);
-    }
     synchronized (this.handshakeSync) {
       if (success) {
         this.handshakeRead = true;
@@ -1593,6 +1588,11 @@ public class Connection implements Runnable {
         }
         asyncClose(false);
         this.owner.removeAndCloseThreadOwnedSockets();
+      } else {
+        if (getConduit().useSSL()) {
+          ByteBuffer buffer = ioFilter.getUnwrappedBuffer(inputBuffer);
+          buffer.position(0).limit(0);
+        }
       }
       releaseInputBuffer();
 


[geode] 07/21: fixing spotless and pmd errors from reverts

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit b4cd0baca4ee7564ee09305ba2b78bda9a16943b
Author: Bruce Schuchardt <bs...@pivotal.io>
AuthorDate: Thu Jun 27 14:59:28 2019 -0700

    fixing spotless and pmd errors from reverts
---
 .../internal/net/SSLSocketIntegrationTest.java     |   1 -
 .../statistics/platform/LinuxProcFsStatistics.java | 178 +++++++++++----------
 .../org/apache/geode/internal/tcp/Buffers.java     |   1 +
 .../org/apache/geode/internal/tcp/Connection.java  |   2 +-
 .../org/apache/geode/internal/tcp/TCPConduit.java  |   3 +
 5 files changed, 95 insertions(+), 90 deletions(-)

diff --git a/geode-core/src/integrationTest/java/org/apache/geode/internal/net/SSLSocketIntegrationTest.java b/geode-core/src/integrationTest/java/org/apache/geode/internal/net/SSLSocketIntegrationTest.java
index 32640d9..bc8b230 100755
--- a/geode-core/src/integrationTest/java/org/apache/geode/internal/net/SSLSocketIntegrationTest.java
+++ b/geode-core/src/integrationTest/java/org/apache/geode/internal/net/SSLSocketIntegrationTest.java
@@ -59,7 +59,6 @@ import org.junit.rules.TestName;
 import org.apache.geode.distributed.internal.DistributionConfig;
 import org.apache.geode.distributed.internal.DistributionConfigImpl;
 import org.apache.geode.internal.security.SecurableCommunicationChannel;
-import org.apache.geode.internal.tcp.ByteBufferInputStream;
 import org.apache.geode.test.dunit.IgnoredException;
 import org.apache.geode.test.junit.categories.MembershipTest;
 
diff --git a/geode-core/src/main/java/org/apache/geode/internal/statistics/platform/LinuxProcFsStatistics.java b/geode-core/src/main/java/org/apache/geode/internal/statistics/platform/LinuxProcFsStatistics.java
index 3f687ca..f5f7d9a 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/statistics/platform/LinuxProcFsStatistics.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/statistics/platform/LinuxProcFsStatistics.java
@@ -64,7 +64,8 @@ public class LinuxProcFsStatistics {
   private static boolean hasProcVmStat;
   @MakeNotStatic
   private static boolean hasDiskStats;
-  static SpaceTokenizer st;
+  @MakeNotStatic
+  static SpaceTokenizer tokenizer;
 
   /** The number of non-process files in /proc */
   @MakeNotStatic
@@ -91,13 +92,13 @@ public class LinuxProcFsStatistics {
     cpuStatSingleton = new CpuStat();
     hasProcVmStat = new File("/proc/vmstat").exists();
     hasDiskStats = new File("/proc/diskstats").exists();
-    st = new SpaceTokenizer();
+    tokenizer = new SpaceTokenizer();
     return 0;
   }
 
   public static void close() { // TODO: was package-protected
     cpuStatSingleton = null;
-    st = null;
+    tokenizer = null;
   }
 
   public static void readyRefresh() { // TODO: was package-protected
@@ -125,10 +126,11 @@ public class LinuxProcFsStatistics {
       if (line == null) {
         return;
       }
-      st.setString(line);
-      st.skipTokens(22);
-      ints[LinuxProcessStats.imageSizeINT] = (int) (st.nextTokenAsLong() / OneMeg);
-      ints[LinuxProcessStats.rssSizeINT] = (int) ((st.nextTokenAsLong() * pageSize) / OneMeg);
+      tokenizer.setString(line);
+      tokenizer.skipTokens(22);
+      ints[LinuxProcessStats.imageSizeINT] = (int) (tokenizer.nextTokenAsLong() / OneMeg);
+      ints[LinuxProcessStats.rssSizeINT] =
+          (int) ((tokenizer.nextTokenAsLong() * pageSize) / OneMeg);
     } catch (NoSuchElementException nsee) {
       // It might just be a case of the process going away while we
       // where trying to get its stats.
@@ -140,7 +142,7 @@ public class LinuxProcFsStatistics {
       // So for now lets just ignore the failure and leave the stats
       // as they are.
     } finally {
-      st.releaseResources();
+      tokenizer.releaseResources();
       if (br != null)
         try {
           br.close();
@@ -215,7 +217,7 @@ public class LinuxProcFsStatistics {
     if (hasProcVmStat) {
       getVmStats(longs);
     }
-    st.releaseResources();
+    tokenizer.releaseResources();
   }
 
   // Example of /proc/loadavg
@@ -230,14 +232,14 @@ public class LinuxProcFsStatistics {
       if (line == null) {
         return;
       }
-      st.setString(line);
-      doubles[LinuxSystemStats.loadAverage1DOUBLE] = st.nextTokenAsDouble();
-      doubles[LinuxSystemStats.loadAverage5DOUBLE] = st.nextTokenAsDouble();
-      doubles[LinuxSystemStats.loadAverage15DOUBLE] = st.nextTokenAsDouble();
+      tokenizer.setString(line);
+      doubles[LinuxSystemStats.loadAverage1DOUBLE] = tokenizer.nextTokenAsDouble();
+      doubles[LinuxSystemStats.loadAverage5DOUBLE] = tokenizer.nextTokenAsDouble();
+      doubles[LinuxSystemStats.loadAverage15DOUBLE] = tokenizer.nextTokenAsDouble();
     } catch (NoSuchElementException nsee) {
     } catch (IOException ioe) {
     } finally {
-      st.releaseResources();
+      tokenizer.releaseResources();
       if (br != null)
         try {
           br.close();
@@ -295,41 +297,41 @@ public class LinuxProcFsStatistics {
       while ((line = br.readLine()) != null) {
         try {
           if (line.startsWith("MemTotal: ")) {
-            st.setString(line);
-            st.skipToken(); // Burn initial token
-            ints[LinuxSystemStats.physicalMemoryINT] = (int) (st.nextTokenAsLong() / 1024);
+            tokenizer.setString(line);
+            tokenizer.skipToken(); // Burn initial token
+            ints[LinuxSystemStats.physicalMemoryINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
           } else if (line.startsWith("MemFree: ")) {
-            st.setString(line);
-            st.skipToken(); // Burn initial token
-            ints[LinuxSystemStats.freeMemoryINT] = (int) (st.nextTokenAsLong() / 1024);
+            tokenizer.setString(line);
+            tokenizer.skipToken(); // Burn initial token
+            ints[LinuxSystemStats.freeMemoryINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
           } else if (line.startsWith("SharedMem: ")) {
-            st.setString(line);
-            st.skipToken(); // Burn initial token
-            ints[LinuxSystemStats.sharedMemoryINT] = (int) (st.nextTokenAsLong() / 1024);
+            tokenizer.setString(line);
+            tokenizer.skipToken(); // Burn initial token
+            ints[LinuxSystemStats.sharedMemoryINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
           } else if (line.startsWith("Buffers: ")) {
-            st.setString(line);
-            st.nextToken(); // Burn initial token
-            ints[LinuxSystemStats.bufferMemoryINT] = (int) (st.nextTokenAsLong() / 1024);
+            tokenizer.setString(line);
+            tokenizer.nextToken(); // Burn initial token
+            ints[LinuxSystemStats.bufferMemoryINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
           } else if (line.startsWith("SwapTotal: ")) {
-            st.setString(line);
-            st.skipToken(); // Burn initial token
-            ints[LinuxSystemStats.allocatedSwapINT] = (int) (st.nextTokenAsLong() / 1024);
+            tokenizer.setString(line);
+            tokenizer.skipToken(); // Burn initial token
+            ints[LinuxSystemStats.allocatedSwapINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
           } else if (line.startsWith("SwapFree: ")) {
-            st.setString(line);
-            st.skipToken(); // Burn initial token
-            ints[LinuxSystemStats.unallocatedSwapINT] = (int) (st.nextTokenAsLong() / 1024);
+            tokenizer.setString(line);
+            tokenizer.skipToken(); // Burn initial token
+            ints[LinuxSystemStats.unallocatedSwapINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
           } else if (line.startsWith("Cached: ")) {
-            st.setString(line);
-            st.skipToken(); // Burn initial token
-            ints[LinuxSystemStats.cachedMemoryINT] = (int) (st.nextTokenAsLong() / 1024);
+            tokenizer.setString(line);
+            tokenizer.skipToken(); // Burn initial token
+            ints[LinuxSystemStats.cachedMemoryINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
           } else if (line.startsWith("Dirty: ")) {
-            st.setString(line);
-            st.skipToken(); // Burn initial token
-            ints[LinuxSystemStats.dirtyMemoryINT] = (int) (st.nextTokenAsLong() / 1024);
+            tokenizer.setString(line);
+            tokenizer.skipToken(); // Burn initial token
+            ints[LinuxSystemStats.dirtyMemoryINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
           } else if (line.startsWith("Inact_dirty: ")) { // 2.4 kernels
-            st.setString(line);
-            st.skipToken(); // Burn initial token
-            ints[LinuxSystemStats.dirtyMemoryINT] = (int) (st.nextTokenAsLong() / 1024);
+            tokenizer.setString(line);
+            tokenizer.skipToken(); // Burn initial token
+            ints[LinuxSystemStats.dirtyMemoryINT] = (int) (tokenizer.nextTokenAsLong() / 1024);
           }
         } catch (NoSuchElementException nsee) {
           // ignore and let that stat not to be updated this time
@@ -337,7 +339,7 @@ public class LinuxProcFsStatistics {
       }
     } catch (IOException ioe) {
     } finally {
-      st.releaseResources();
+      tokenizer.releaseResources();
       if (br != null)
         try {
           br.close();
@@ -362,13 +364,13 @@ public class LinuxProcFsStatistics {
         line = br.readLine();
       } while (line != null && !line.startsWith("TcpExt:"));
 
-      st.setString(line);
-      st.skipTokens(1);
-      long tcpSyncookiesSent = st.nextTokenAsLong();
-      long tcpSyncookiesRecv = st.nextTokenAsLong();
-      st.skipTokens(17);
-      long tcpListenOverflows = st.nextTokenAsLong();
-      long tcpListenDrops = st.nextTokenAsLong();
+      tokenizer.setString(line);
+      tokenizer.skipTokens(1);
+      long tcpSyncookiesSent = tokenizer.nextTokenAsLong();
+      long tcpSyncookiesRecv = tokenizer.nextTokenAsLong();
+      tokenizer.skipTokens(17);
+      long tcpListenOverflows = tokenizer.nextTokenAsLong();
+      long tcpListenDrops = tokenizer.nextTokenAsLong();
 
       longs[LinuxSystemStats.tcpExtSynCookiesRecvLONG] = tcpSyncookiesRecv;
       longs[LinuxSystemStats.tcpExtSynCookiesSentLONG] = tcpSyncookiesSent;
@@ -380,8 +382,8 @@ public class LinuxProcFsStatistics {
         isr = new InputStreamReader(new FileInputStream("/proc/sys/net/core/somaxconn"));
         br = new BufferedReader(isr);
         line = br.readLine();
-        st.setString(line);
-        soMaxConn = st.nextTokenAsInt();
+        tokenizer.setString(line);
+        soMaxConn = tokenizer.nextTokenAsInt();
         soMaxConnProcessed = true;
       }
 
@@ -390,7 +392,7 @@ public class LinuxProcFsStatistics {
     } catch (NoSuchElementException nsee) {
     } catch (IOException ioe) {
     } finally {
-      st.releaseResources();
+      tokenizer.releaseResources();
       if (br != null) {
         try {
           br.close();
@@ -423,18 +425,18 @@ public class LinuxProcFsStatistics {
       while ((line = br.readLine()) != null) {
         int index = line.indexOf(":");
         boolean isloopback = (line.indexOf("lo:") != -1);
-        st.setString(line.substring(index + 1).trim());
-        long recv_bytes = st.nextTokenAsLong();
-        long recv_packets = st.nextTokenAsLong();
-        long recv_errs = st.nextTokenAsLong();
-        long recv_drop = st.nextTokenAsLong();
-        st.skipTokens(4); // fifo, frame, compressed, multicast
-        long xmit_bytes = st.nextTokenAsLong();
-        long xmit_packets = st.nextTokenAsLong();
-        long xmit_errs = st.nextTokenAsLong();
-        long xmit_drop = st.nextTokenAsLong();
-        st.skipToken(); // fifo
-        long xmit_colls = st.nextTokenAsLong();
+        tokenizer.setString(line.substring(index + 1).trim());
+        long recv_bytes = tokenizer.nextTokenAsLong();
+        long recv_packets = tokenizer.nextTokenAsLong();
+        long recv_errs = tokenizer.nextTokenAsLong();
+        long recv_drop = tokenizer.nextTokenAsLong();
+        tokenizer.skipTokens(4); // fifo, frame, compressed, multicast
+        long xmit_bytes = tokenizer.nextTokenAsLong();
+        long xmit_packets = tokenizer.nextTokenAsLong();
+        long xmit_errs = tokenizer.nextTokenAsLong();
+        long xmit_drop = tokenizer.nextTokenAsLong();
+        tokenizer.skipToken(); // fifo
+        long xmit_colls = tokenizer.nextTokenAsLong();
 
         if (isloopback) {
           lo_recv_packets = recv_packets;
@@ -471,7 +473,7 @@ public class LinuxProcFsStatistics {
     } catch (NoSuchElementException nsee) {
     } catch (IOException ioe) {
     } finally {
-      st.releaseResources();
+      tokenizer.releaseResources();
       if (br != null)
         try {
           br.close();
@@ -531,22 +533,22 @@ public class LinuxProcFsStatistics {
         br.readLine(); // Discard header info
       }
       while ((line = br.readLine()) != null) {
-        st.setString(line);
+        tokenizer.setString(line);
         {
           // " 8 1 sdb" on 2.6
           // " 8 1 452145145 sdb" on 2.4
-          String tok = st.nextToken();
+          String tok = tokenizer.nextToken();
           if (tok.length() == 0 || Character.isWhitespace(tok.charAt(0))) {
             // skip over first token since it is whitespace
-            tok = st.nextToken();
+            tok = tokenizer.nextToken();
           }
           // skip first token it is some number
-          tok = st.nextToken();
+          tok = tokenizer.nextToken();
           // skip second token it is some number
-          tok = st.nextToken();
+          tok = tokenizer.nextToken();
           if (!hasDiskStats) {
             // skip third token it is some number
-            tok = st.nextToken();
+            tok = tokenizer.nextToken();
           }
           // Now tok should be the device name.
           if (Character.isDigit(tok.charAt(tok.length() - 1))) {
@@ -555,20 +557,20 @@ public class LinuxProcFsStatistics {
             continue;
           }
         }
-        long tmp_readsCompleted = st.nextTokenAsLong();
-        long tmp_readsMerged = st.nextTokenAsLong();
-        long tmp_sectorsRead = st.nextTokenAsLong();
-        long tmp_timeReading = st.nextTokenAsLong();
-        if (st.hasMoreTokens()) {
+        long tmp_readsCompleted = tokenizer.nextTokenAsLong();
+        long tmp_readsMerged = tokenizer.nextTokenAsLong();
+        long tmp_sectorsRead = tokenizer.nextTokenAsLong();
+        long tmp_timeReading = tokenizer.nextTokenAsLong();
+        if (tokenizer.hasMoreTokens()) {
           // If we are on 2.6 then we might only have 4 longs; if so ignore this line
           // Otherwise we should have 11 long tokens.
-          long tmp_writesCompleted = st.nextTokenAsLong();
-          long tmp_writesMerged = st.nextTokenAsLong();
-          long tmp_sectorsWritten = st.nextTokenAsLong();
-          long tmp_timeWriting = st.nextTokenAsLong();
-          long tmp_iosInProgress = st.nextTokenAsLong();
-          long tmp_timeIosInProgress = st.nextTokenAsLong();
-          long tmp_ioTime = st.nextTokenAsLong();
+          long tmp_writesCompleted = tokenizer.nextTokenAsLong();
+          long tmp_writesMerged = tokenizer.nextTokenAsLong();
+          long tmp_sectorsWritten = tokenizer.nextTokenAsLong();
+          long tmp_timeWriting = tokenizer.nextTokenAsLong();
+          long tmp_iosInProgress = tokenizer.nextTokenAsLong();
+          long tmp_timeIosInProgress = tokenizer.nextTokenAsLong();
+          long tmp_ioTime = tokenizer.nextTokenAsLong();
           readsCompleted += tmp_readsCompleted;
           readsMerged += tmp_readsMerged;
           sectorsRead += tmp_sectorsRead;
@@ -599,7 +601,7 @@ public class LinuxProcFsStatistics {
       // NoSuchElementException line=" + line, nsee);
     } catch (IOException ioe) {
     } finally {
-      st.releaseResources();
+      tokenizer.releaseResources();
       if (br != null)
         try {
           br.close();
@@ -708,8 +710,8 @@ public class LinuxProcFsStatistics {
     }
 
     public int[] calculateStats(String newStatLine) {
-      st.setString(newStatLine);
-      st.skipToken(); // cpu name
+      tokenizer.setString(newStatLine);
+      tokenizer.skipToken(); // cpu name
       final int MAX_CPU_STATS = CPU.values().length;
       /*
        * newer kernels now have 10 columns for cpu in /proc/stat. This number may increase even
@@ -722,8 +724,8 @@ public class LinuxProcFsStatistics {
       int actualCpuStats = 0;
       long unaccountedCpuUtilization = 0;
 
-      while (st.hasMoreTokens()) {
-        newStats.add(st.nextTokenAsLong());
+      while (tokenizer.hasMoreTokens()) {
+        newStats.add(tokenizer.nextTokenAsLong());
         actualCpuStats++;
       }
 
diff --git a/geode-core/src/main/java/org/apache/geode/internal/tcp/Buffers.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/Buffers.java
index b0f5612..fda7c0e 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/tcp/Buffers.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/Buffers.java
@@ -28,6 +28,7 @@ public class Buffers {
   /**
    * A list of soft references to byte buffers.
    */
+  @MakeNotStatic
   private static final ConcurrentLinkedQueue bufferQueue = new ConcurrentLinkedQueue();
 
   /**
diff --git a/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java
index 47e90a2..395130f 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java
@@ -48,7 +48,6 @@ import java.util.concurrent.atomic.AtomicLong;
 import org.apache.logging.log4j.Logger;
 
 import org.apache.geode.CancelException;
-import org.apache.geode.SerializationException;
 import org.apache.geode.SystemFailure;
 import org.apache.geode.annotations.internal.MakeNotStatic;
 import org.apache.geode.annotations.internal.MutableForTesting;
@@ -98,6 +97,7 @@ public class Connection implements Runnable {
   @MakeNotStatic
   private static final int INITIAL_CAPACITY =
       Integer.getInteger("p2p.readerBufferSize", 32768).intValue();
+  @MakeNotStatic
   private static int P2P_CONNECT_TIMEOUT;
   @MakeNotStatic
   private static boolean IS_P2P_CONNECT_TIMEOUT_INITIALIZED = false;
diff --git a/geode-core/src/main/java/org/apache/geode/internal/tcp/TCPConduit.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/TCPConduit.java
index c6b8bf9..fbda5b7 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/tcp/TCPConduit.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/TCPConduit.java
@@ -110,6 +110,7 @@ public class TCPConduit implements Runnable {
   /**
    * use javax.net.ssl.SSLServerSocketFactory?
    */
+  @MakeNotStatic
   static boolean useSSL;
 
   /**
@@ -117,11 +118,13 @@ public class TCPConduit implements Runnable {
    * java VM, NIO cannot be used with IPv6 addresses on Windows. When that condition holds, the
    * useNIO flag must be disregarded.
    */
+  @MakeNotStatic
   private static boolean USE_NIO;
 
   /**
    * use direct ByteBuffers instead of heap ByteBuffers for NIO operations
    */
+  @MakeNotStatic
   static boolean useDirectBuffers;
 
   /**


[geode] 20/21: GEODE-6734: Change packer image resources and scripts to Bionic

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit 5a746977829171bcb841582ede755de9865a9f83
Author: Robert Houghton <rh...@pivotal.io>
AuthorDate: Wed May 1 10:11:50 2019 -0700

    GEODE-6734: Change packer image resources and scripts to Bionic
    
    Authored-by: Robert Houghton <rh...@pivotal.io>
---
 ci/images/google-geode-builder/packer.json      |  2 +-
 ci/images/google-geode-builder/scripts/setup.sh | 13 +++++--------
 ci/pipelines/meta/deploy_meta.sh                |  7 +++----
 3 files changed, 9 insertions(+), 13 deletions(-)

diff --git a/ci/images/google-geode-builder/packer.json b/ci/images/google-geode-builder/packer.json
index 4c905da..2d2167f 100644
--- a/ci/images/google-geode-builder/packer.json
+++ b/ci/images/google-geode-builder/packer.json
@@ -32,7 +32,7 @@
     {
       "type": "googlecompute",
       "project_id": "{{user `gcp_project`}}",
-      "source_image_family": "debian-9",
+      "source_image_family": "ubuntu-minimal-1804-lts",
       "ssh_username": "packer",
       "zone": "us-central1-f",
       "image_family": "{{user `pipeline_prefix`}}geode-builder",
diff --git a/ci/images/google-geode-builder/scripts/setup.sh b/ci/images/google-geode-builder/scripts/setup.sh
index af4ca30..11fbf4a 100755
--- a/ci/images/google-geode-builder/scripts/setup.sh
+++ b/ci/images/google-geode-builder/scripts/setup.sh
@@ -28,11 +28,11 @@ apt-get install -y --no-install-recommends \
   lsb-release
 
 echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list
-echo "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
+echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
 curl -sSL https://dl.google.com/linux/linux_signing_key.pub | apt-key add -
-curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
 apt-get update
-apt-get purge -y google-cloud-sdk lxc-docker
+set +e && apt-get purge -y google-cloud-sdk lxc-docker && set -e
 apt-get install -y --no-install-recommends \
     aptitude \
     ca-certificates \
@@ -55,14 +55,11 @@ apt-get install -y --no-install-recommends \
 
 cp -R /etc/alternatives /etc/keep-alternatives
 apt-get install -y --no-install-recommends \
-    openjdk-8-jdk
+    openjdk-8-jdk \
+    openjdk-11-jdk
 rm -rf /etc/alternatives
 mv /etc/keep-alternatives /etc/alternatives
 
-JDK_URL=$(curl -Ls http://jdk.java.net/11 | awk '/linux-x64/{sub(/.*href=./,"");sub(/".*/,"");if(found!=1)print;found=1}')
-tar xzf <(curl -s $JDK_URL) -C /usr/lib/jvm
-mv /usr/lib/jvm/jdk-11* /usr/lib/jvm/java-11-openjdk-amd64
-
 pushd /tmp
   curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-${CLOUD_SDK_VERSION}-linux-x86_64.tar.gz
   tar xzf google-cloud-sdk-${CLOUD_SDK_VERSION}-linux-x86_64.tar.gz -C /
diff --git a/ci/pipelines/meta/deploy_meta.sh b/ci/pipelines/meta/deploy_meta.sh
index aaa153b..46bf3f6 100755
--- a/ci/pipelines/meta/deploy_meta.sh
+++ b/ci/pipelines/meta/deploy_meta.sh
@@ -123,13 +123,12 @@ YML
     --var concourse-team=main \
     --yaml-var public-pipelines=${PUBLIC} 2>&1 |tee flyOutput.log
 
+  if [[ "$(tail -n1 flyOutput.log)" == "bailing out" ]]; then
+    exit 1
+  fi
 popd 2>&1 > /dev/null
 
 
-if [[ "$(tail -n1 flyOutput.log)" == "bailing out" ]]; then
-  exit 1
-fi
-
 # bootstrap all precursors of the actual Build job
 
 function jobStatus {


[geode] 19/21: [GEODE-7027] Use cygwin to get rsync instead of chocolatey directly. (#3863)

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit be95b69938e1b479c9cd788eb939b0dcd8347efd
Author: Sean Goller <sg...@pivotal.io>
AuthorDate: Tue Jul 30 11:17:31 2019 -0700

    [GEODE-7027] Use cygwin to get rsync instead of chocolatey directly. (#3863)
    
    Authored-by: Sean Goller <sg...@pivotal.io>
    (cherry picked from commit a37a43a98ddac2bdda3090682c8f74bd400c37f8)
---
 .../google-windows-geode-builder/windows-packer.json    | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/ci/images/google-windows-geode-builder/windows-packer.json b/ci/images/google-windows-geode-builder/windows-packer.json
index c364d3c..fa3b2f5 100644
--- a/ci/images/google-windows-geode-builder/windows-packer.json
+++ b/ci/images/google-windows-geode-builder/windows-packer.json
@@ -55,26 +55,21 @@
         "$ErrorActionPreference = \"Stop\"",
         "Set-ExecutionPolicy Bypass -Scope Process -Force",
         "Invoke-WebRequest https://chocolatey.org/install.ps1 -UseBasicParsing | Invoke-Expression",
-        "choco install -y git rsync adoptopenjdk11",
+        "choco install -y git cygwin cyg-get adoptopenjdk11",
         "Move-Item \"C:\\Program Files\\AdoptOpenJDK\\jdk-11*\" c:\\java11",
         "choco install -y jdk8 -params 'installdir=c:\\\\java8tmp;source=false'",
         "Move-Item \"C:\\java8tmp\" c:\\java8",
         "choco install -y openssh --version 7.7.2.1 /SSHServerFeature",
         "refreshenv",
-        "$a = 10",
-        "do {",
-        "write-output \">>>>>>>>>> Installing rsync: $a attempts remaining <<<<<<<<<<\"",
-        "choco install -y rsync",
-        "$a--",
-        "} while (-not (test-path C:\\ProgramData\\chocolatey\\bin\\rsync.exe) -and $a -gt 0)",
-        "get-item C:\\ProgramData\\chocolatey\\bin\\rsync.exe",
+        "$OldPath = (Get-ItemProperty -Path 'Registry::HKEY_LOCAL_MACHINE\\System\\CurrentControlSet\\Control\\Session Manager\\Environment' -Name PATH).Path",
+        "$NewPath = $OldPath + ';' + 'c:\\Program Files\\Git\\bin' + ';' + 'c:\\tools\\cygwin\\bin'",
+        "Set-ItemProperty -Path 'Registry::HKEY_LOCAL_MACHINE\\System\\CurrentControlSet\\Control\\Session Manager\\Environment' -Name PATH -Value $NewPath",
+        "refreshenv",
+        "cyg-get rsync",
         "winrm set winrm/config/service '@{AllowUnencrypted=\"true\"}'",
         "New-NetFirewallRule -DisplayName sshd -Direction inbound -Action allow -Protocol tcp -LocalPort 22",
         "New-NetFirewallRule -DisplayName \"Docker containers\" -LocalAddress 172.0.0.0/8 -Action allow -Direction inbound",
         "New-Service -name sshd -description 'OpenSSH sshd server' -binarypathname 'c:\\Program Files\\OpenSSH-Win64\\sshd.exe' -startuptype automatic",
-        "$OldPath = (Get-ItemProperty -Path 'Registry::HKEY_LOCAL_MACHINE\\System\\CurrentControlSet\\Control\\Session Manager\\Environment' -Name PATH).Path",
-        "$NewPath = $OldPath + ';' + 'c:\\Program Files\\Git\\bin'",
-        "Set-ItemProperty -Path 'Registry::HKEY_LOCAL_MACHINE\\System\\CurrentControlSet\\Control\\Session Manager\\Environment' -Name PATH -Value $NewPath",
         "write-output '>>>>>>>>>> Modify sshd config to comment use of administrators authorized key file <<<<<<<<<<'",
         "(Get-Content \"C:\\Program Files\\OpenSSH-Win64\\sshd_config_default\") -replace '(Match Group administrators)', '#$1' -replace '(\\s*AuthorizedKeysFile.*)', '#$1' | Out-File \"C:\\Program Files\\OpenSSH-Win64\\sshd_config_default\" -encoding UTF8"
       ]


[geode] 21/21: Remove unattended-upgrades and autoremove unnecessary stuff. (#3881)

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit 8e541c517948f17053d6d9dfebed06c25d4546fa
Author: Sean Goller <se...@goller.net>
AuthorDate: Fri Aug 2 12:33:30 2019 -0700

    Remove unattended-upgrades and autoremove unnecessary stuff. (#3881)
---
 ci/images/google-geode-builder/scripts/setup.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ci/images/google-geode-builder/scripts/setup.sh b/ci/images/google-geode-builder/scripts/setup.sh
index 11fbf4a..16a38f4 100755
--- a/ci/images/google-geode-builder/scripts/setup.sh
+++ b/ci/images/google-geode-builder/scripts/setup.sh
@@ -87,6 +87,6 @@ ln -fs /opt/selenium/chromedriver-${CHROME_DRIVER_VERSION} /usr/bin/chromedriver
 adduser --disabled-password --gecos "" --uid ${LOCAL_UID} ${LOCAL_USER}
 usermod -G docker,google-sudoers -a ${LOCAL_USER}
 echo "export PATH=/google-cloud-sdk/bin:${PATH}" > /etc/profile.d/google_sdk_path.sh
-
+apt-get remove -y unattended-upgrades && apt-get -y autoremove
 apt-get clean
 rm -rf /var/lib/apt/lists/*


[geode] 08/21: GEODE-7058: Mark log4j-core optional in geode-core

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit dea713637396e6db494dbf9d14135bef656d2699
Author: Kirk Lund <kl...@apache.org>
AuthorDate: Wed Aug 7 14:33:21 2019 -0700

    GEODE-7058: Mark log4j-core optional in geode-core
    
    Note: this change requires all commits from GEODE-2644 and GEODE-6122.
    (cherry picked from commit 413800bc16d05df689a2af5c30797f180aad6088)
---
 geode-core/build.gradle                        | 4 +++-
 geode-core/src/test/resources/expected-pom.xml | 1 +
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/geode-core/build.gradle b/geode-core/build.gradle
index caf08a5..12b6b92 100755
--- a/geode-core/build.gradle
+++ b/geode-core/build.gradle
@@ -215,7 +215,9 @@ dependencies {
   compile('net.sf.jopt-simple:jopt-simple')
 
   compile('org.apache.logging.log4j:log4j-api')
-  compile('org.apache.logging.log4j:log4j-core')
+  compile('org.apache.logging.log4j:log4j-core') {
+    ext.optional = true
+  }
 
   runtimeOnly('org.fusesource.jansi:jansi') {
     ext.optional = true
diff --git a/geode-core/src/test/resources/expected-pom.xml b/geode-core/src/test/resources/expected-pom.xml
index 604bbe5..c0dfba5 100644
--- a/geode-core/src/test/resources/expected-pom.xml
+++ b/geode-core/src/test/resources/expected-pom.xml
@@ -199,6 +199,7 @@
       <groupId>org.apache.logging.log4j</groupId>
       <artifactId>log4j-core</artifactId>
       <scope>compile</scope>
+      <optional>true</optional>
     </dependency>
     <dependency>
       <groupId>org.eclipse.jetty</groupId>
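
With log4j-core flagged <optional>true</optional> in the published pom,
applications that depend on geode-core no longer receive the Log4j backend
transitively, so code paths that need it must tolerate its absence. A
minimal sketch of one common guard -- hypothetical code, not Geode's actual
implementation -- probes the classpath once before touching classes that
exist only in log4j-core:

    // Hypothetical presence check, not Geode's implementation: probe for the
    // optional log4j-core backend before using classes that live only in it.
    public final class Log4jCorePresence {

      private static final boolean PRESENT = detect();

      private Log4jCorePresence() {}

      private static boolean detect() {
        try {
          Class.forName("org.apache.logging.log4j.core.LoggerContext",
              false, Log4jCorePresence.class.getClassLoader());
          return true;
        } catch (ClassNotFoundException | LinkageError absent) {
          return false;
        }
      }

      public static boolean isPresent() {
        return PRESENT;
      }
    }

A consumer that still wants the log4j-core backend now has to declare
org.apache.logging.log4j:log4j-core in its own build, since Maven and
Gradle do not pull optional dependencies in transitively.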


[geode] 14/21: Removing lines we hopefully don't need anymore. (#3613)

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit 558a4e4625924fda9e066e7e2c717faeba14317b
Author: Sean Goller <sg...@pivotal.io>
AuthorDate: Tue May 21 11:50:22 2019 -0700

    Removing lines we hopefully don't need anymore. (#3613)
    
    Authored-by: Sean Goller <sg...@pivotal.io>
    (cherry picked from commit 798b022d46593c52ec2577301af01abebc587eda)
---
 ci/images/google-windows-geode-builder/windows-packer.json | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/ci/images/google-windows-geode-builder/windows-packer.json b/ci/images/google-windows-geode-builder/windows-packer.json
index d2c96f7..0c666b0 100644
--- a/ci/images/google-windows-geode-builder/windows-packer.json
+++ b/ci/images/google-windows-geode-builder/windows-packer.json
@@ -64,9 +64,6 @@
         "$OldPath = (Get-ItemProperty -Path 'Registry::HKEY_LOCAL_MACHINE\\System\\CurrentControlSet\\Control\\Session Manager\\Environment' -Name PATH).Path",
         "$NewPath = $OldPath + ';' + 'c:\\Program Files\\Git\\bin'",
         "Set-ItemProperty -Path 'Registry::HKEY_LOCAL_MACHINE\\System\\CurrentControlSet\\Control\\Session Manager\\Environment' -Name PATH -Value $NewPath",
-        "Get-ChildItem -Path \"C:\\Program Files\\Git\\bin\" -Recurse -Include *exe | %{ Set-ProcessMitigation -Name $_.Name -Disable ForceRelocateASLR,ForceRelocate }",
-        "Get-ChildItem -Path \"C:\\ProgramData\\chocolatey\" -Recurse -Include *exe | %{ Set-ProcessMitigation -Name $_.Name -Disable ForceRelocateASLR,ForceRelocate }",
-
         "write-output '>>>>>>>>>> Modify sshd config to comment use of administrators authorized key file <<<<<<<<<<'",
         "(Get-Content \"C:\\Program Files\\OpenSSH-Win64\\sshd_config_default\") -replace '(Match Group administrators)', '#$1' -replace '(\\s*AuthorizedKeysFile.*)', '#$1' | Out-File \"C:\\Program Files\\OpenSSH-Win64\\sshd_config_default\" -encoding UTF8",
         "write-output '>>>>>>>>>> Adding openjdk docker image <<<<<<<<<<'",


[geode] 15/21: Update windows image and tweaks to support it. (#3649)

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit d33bda5a68de47e406718174f1eeba8f33058f65
Author: Sean Goller <sg...@pivotal.io>
AuthorDate: Thu May 30 11:06:52 2019 -0700

    Update windows image and tweaks to support it. (#3649)
    
    * Something changed in awt.dll in Java 11 and integration
      tests no longer complete under Windows Server Core.
    * Use Windows Server 2016, install docker and container support.
    * Fix execute_tests.sh so that DUnit options are not included when
      running tests that are not using DUnit.
    
    Authored-by: Sean Goller <sg...@pivotal.io>
    (cherry picked from commit ec08f36920d101dd34c4b4c40c720855fa8fe975)
---
 .../windows-packer.json                            | 25 ++++++++++++++++------
 ci/scripts/execute_tests.sh                        |  3 +--
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/ci/images/google-windows-geode-builder/windows-packer.json b/ci/images/google-windows-geode-builder/windows-packer.json
index 0c666b0..4c6ee0c 100644
--- a/ci/images/google-windows-geode-builder/windows-packer.json
+++ b/ci/images/google-windows-geode-builder/windows-packer.json
@@ -17,9 +17,9 @@
       "project_id": "{{user `gcp_project`}}",
       "network": "{{user `gcp_network`}}",
       "subnetwork": "{{user `gcp_subnetwork`}}",
-      "source_image_family": "windows-1809-core-for-containers",
+      "source_image_family": "windows-2016",
       "disk_size": "100",
-      "machine_type": "n1-standard-1",
+      "machine_type": "n1-standard-4",
       "communicator": "winrm",
       "winrm_username": "geode",
       "winrm_insecure": true,
@@ -40,7 +40,20 @@
       "inline": [
         "$ErrorActionPreference = \"Stop\"",
         "Set-ExecutionPolicy Bypass -Scope Process -Force",
-
+        "Install-WindowsFeature Containers",
+        "Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force",
+        "Install-Module DockerMsftProvider -Force",
+        "Install-Package Docker -ProviderName DockerMsftProvider -Force"
+        ]
+    },
+    {
+      "type": "windows-restart"
+    },
+    {
+      "type": "powershell",
+      "inline": [
+        "$ErrorActionPreference = \"Stop\"",
+        "Set-ExecutionPolicy Bypass -Scope Process -Force",
         "Invoke-WebRequest https://chocolatey.org/install.ps1 -UseBasicParsing | Invoke-Expression",
         "choco install -y git rsync adoptopenjdk11",
         "Move-Item \"C:\\Program Files\\AdoptOpenJDK\\jdk-11*\" c:\\java11",
@@ -67,10 +80,10 @@
         "write-output '>>>>>>>>>> Modify sshd config to comment use of administrators authorized key file <<<<<<<<<<'",
         "(Get-Content \"C:\\Program Files\\OpenSSH-Win64\\sshd_config_default\") -replace '(Match Group administrators)', '#$1' -replace '(\\s*AuthorizedKeysFile.*)', '#$1' | Out-File \"C:\\Program Files\\OpenSSH-Win64\\sshd_config_default\" -encoding UTF8",
         "write-output '>>>>>>>>>> Adding openjdk docker image <<<<<<<<<<'",
-        "docker pull openjdk:8u181-jdk-windowsservercore-1709",
+        "docker pull openjdk:8u212-jdk-windowsservercore-1809",
         "write-output '>>>>>>>>>> Removing unused docker images <<<<<<<<<<'",
-        "docker rmi microsoft/windowsservercore:1709",
-        "docker rmi microsoft/nanoserver:1709",
+        "docker rmi microsoft/windowsservercore:1809",
+        "docker rmi microsoft/nanoserver:1809",
 
         "Set-Content -Path c:\\ProgramData\\docker\\config\\daemon.json -Value '{ \"hosts\": [\"tcp://0.0.0.0:2375\", \"npipe://\"] }'",
 
diff --git a/ci/scripts/execute_tests.sh b/ci/scripts/execute_tests.sh
index 42c4e1b..515a9c6 100755
--- a/ci/scripts/execute_tests.sh
+++ b/ci/scripts/execute_tests.sh
@@ -54,7 +54,7 @@ scp ${SSH_OPTIONS} ${SCRIPTDIR}/capture-call-stacks.sh geode@${INSTANCE_IP_ADDRE
 
 
 if [[ -n "${PARALLEL_DUNIT}" && "${PARALLEL_DUNIT}" == "true" ]]; then
-  PARALLEL_DUNIT="-PparallelDunit -PdunitDockerUser=geode"
+  PARALLEL_DUNIT="-PparallelDunit -PdunitDockerUser=geode -PdunitDockerImage=\$(docker images --format '{{.Repository}}:{{.Tag}}')"
   if [ -n "${DUNIT_PARALLEL_FORKS}" ]; then
     DUNIT_PARALLEL_FORKS="-PdunitParallelForks=${DUNIT_PARALLEL_FORKS}"
   fi
@@ -90,7 +90,6 @@ GRADLE_ARGS=" \
     -PtestJVMVer=${JAVA_TEST_VERSION} \
     ${PARALLEL_DUNIT} \
     ${DUNIT_PARALLEL_FORKS} \
-    -PdunitDockerImage=\$(docker images --format '{{.Repository}}:{{.Tag}}') \
     ${DEFAULT_GRADLE_TASK_OPTIONS} \
     ${GRADLE_SKIP_TASK_OPTIONS} \
     ${GRADLE_TASK} \


[geode] 03/21: Revert "GEODE-6389 CI Failure: ConcurrentWANPropagation_1_DUnitTest.testReplicatedSerialPropagation_withoutRemoteSite"

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit 7b956f5653a67906ef2f6f86d683242ed5096052
Author: Bruce Schuchardt <bs...@pivotal.io>
AuthorDate: Thu Jun 27 14:34:58 2019 -0700

    Revert "GEODE-6389 CI Failure: ConcurrentWANPropagation_1_DUnitTest.testReplicatedSerialPropagation_withoutRemoteSite"
    
    This reverts commit 71dacf6a6f8de535c07be4584ee3d054a41b10e3.
---
 .../org/apache/geode/internal/net/NioPlainEngine.java |  7 +------
 .../org/apache/geode/internal/net/NioSslEngine.java   | 10 +++-------
 .../org/apache/geode/internal/tcp/Connection.java     | 19 +++----------------
 .../java/org/apache/geode/internal/tcp/MsgReader.java | 13 ++++++-------
 4 files changed, 13 insertions(+), 36 deletions(-)

diff --git a/geode-core/src/main/java/org/apache/geode/internal/net/NioPlainEngine.java b/geode-core/src/main/java/org/apache/geode/internal/net/NioPlainEngine.java
index 8a3e3fb..972c854 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/net/NioPlainEngine.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/net/NioPlainEngine.java
@@ -55,12 +55,7 @@ public class NioPlainEngine implements NioFilter {
       Buffers.BufferType bufferType, DMStats stats) {
     ByteBuffer buffer = wrappedBuffer;
 
-    if (buffer == null) {
-      buffer = Buffers.acquireBuffer(bufferType, amount, stats);
-      buffer.clear();
-      lastProcessedPosition = 0;
-      lastReadPosition = 0;
-    } else if (buffer.capacity() > amount) {
+    if (buffer.capacity() > amount) {
       // we already have a buffer that's big enough
       if (buffer.capacity() - lastProcessedPosition < amount) {
         buffer.limit(lastReadPosition);
diff --git a/geode-core/src/main/java/org/apache/geode/internal/net/NioSslEngine.java b/geode-core/src/main/java/org/apache/geode/internal/net/NioSslEngine.java
index dd71d75..14c32fa 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/net/NioSslEngine.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/net/NioSslEngine.java
@@ -311,14 +311,10 @@ public class NioSslEngine implements NioFilter {
   @Override
   public ByteBuffer ensureWrappedCapacity(int amount, ByteBuffer wrappedBuffer,
       Buffers.BufferType bufferType, DMStats stats) {
-    ByteBuffer buffer = wrappedBuffer;
-    int requiredSize = engine.getSession().getPacketBufferSize();
-    if (buffer == null) {
-      buffer = Buffers.acquireBuffer(bufferType, requiredSize, stats);
-    } else if (buffer.capacity() < requiredSize) {
-      buffer = Buffers.expandWriteBufferIfNeeded(bufferType, buffer, requiredSize, stats);
+    if (wrappedBuffer == null) {
+      wrappedBuffer = Buffers.acquireBuffer(bufferType, amount, stats);
     }
-    return buffer;
+    return wrappedBuffer;
   }
 
   @Override
diff --git a/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java
index e659496..7fcbee5 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/Connection.java
@@ -779,6 +779,8 @@ public class Connection implements Runnable {
     // we do the close in a background thread because the operation may hang if
     // there is a problem with the network. See bug #46659
 
+    releaseInputBuffer();
+
     // if simulating sickness, sockets must be closed in-line so that tests know
     // that the vm is sick when the beSick operation completes
     if (beingSickForTests) {
@@ -1446,11 +1448,6 @@ public class Connection implements Runnable {
         }
         // make sure our socket is closed
         asyncClose(false);
-        if (!this.isReceiver) {
-          // receivers release the input buffer when exiting run(). Senders use the
-          // inputBuffer for reading direct-reply responses
-          releaseInputBuffer();
-        }
         lengthSet = false;
       } // synchronized
 
@@ -1588,14 +1585,7 @@ public class Connection implements Runnable {
         }
         asyncClose(false);
         this.owner.removeAndCloseThreadOwnedSockets();
-      } else {
-        if (getConduit().useSSL()) {
-          ByteBuffer buffer = ioFilter.getUnwrappedBuffer(inputBuffer);
-          buffer.position(0).limit(0);
-        }
       }
-      releaseInputBuffer();
-
       // make sure that if the reader thread exits we notify a thread waiting
       // for the handshake.
       // see bug 37524 for an example of listeners hung in waitForHandshake
@@ -2838,7 +2828,7 @@ public class Connection implements Runnable {
     DMStats stats = owner.getConduit().getStats();
     final Version version = getRemoteVersion();
     try {
-      msgReader = new MsgReader(this, ioFilter, version);
+      msgReader = new MsgReader(this, ioFilter, getInputBuffer(), version);
 
       Header header = msgReader.readHeader();
 
@@ -2913,9 +2903,6 @@ public class Connection implements Runnable {
             getRemoteAddress());
         this.ackTimedOut = false;
       }
-      if (msgReader != null) {
-        msgReader.close();
-      }
     }
     synchronized (stateLock) {
       this.connectionState = STATE_RECEIVED_ACK;
diff --git a/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgReader.java b/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgReader.java
index afb0272..adf0305 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgReader.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/tcp/MsgReader.java
@@ -46,9 +46,14 @@ public class MsgReader {
 
 
 
-  MsgReader(Connection conn, NioFilter nioFilter, Version version) {
+  MsgReader(Connection conn, NioFilter nioFilter, ByteBuffer peerNetData, Version version) {
     this.conn = conn;
     this.ioFilter = nioFilter;
+    this.peerNetData = peerNetData;
+    if (conn.getConduit().useSSL()) {
+      ByteBuffer buffer = ioFilter.getUnwrappedBuffer(peerNetData);
+      buffer.position(0).limit(0);
+    }
     this.byteBufferInputStream =
         version == null ? new ByteBufferInputStream() : new VersionedByteBufferInputStream(version);
   }
@@ -129,12 +134,6 @@ public class MsgReader {
     return ioFilter.readAtLeast(conn.getSocket().getChannel(), bytes, peerNetData, getStats());
   }
 
-  public void close() {
-    if (peerNetData != null) {
-      Buffers.releaseReceiveBuffer(peerNetData, getStats());
-    }
-  }
-
 
 
   private DMStats getStats() {
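
The code removed here (the '-' lines) had ensureWrappedCapacity allocate a
buffer when none was passed in and grow an undersized one; the revert goes
back to handing the input buffer to MsgReader explicitly. For readers
following the buffer handling, this is the general acquire-or-reuse shape
of the removed logic -- plain java.nio with a hypothetical helper, not the
Geode Buffers API, which draws from a stats-tracked pool instead:

    import java.nio.ByteBuffer;

    public class EnsureCapacityExample {

      // Reuse the current buffer when it is already big enough; otherwise
      // allocate a larger one and carry over the bytes written so far.
      static ByteBuffer ensureCapacity(ByteBuffer current, int amount) {
        if (current == null) {
          return ByteBuffer.allocateDirect(amount); // first use, nothing to reuse
        }
        if (current.capacity() >= amount) {
          return current; // already large enough
        }
        ByteBuffer bigger = ByteBuffer.allocateDirect(amount);
        current.flip();      // limit = bytes written so far, position = 0
        bigger.put(current); // copy them into the replacement buffer
        return bigger;
      }
    }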


[geode] 05/21: Revert "GEODE-2113 implement SSL over NIO"

Posted by on...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

onichols pushed a commit to branch release/1.9.1
in repository https://gitbox.apache.org/repos/asf/geode.git

commit 3ae6dd85c09a238f9c72ab1c28454d6bdfa05159
Author: Bruce Schuchardt <bs...@pivotal.io>
AuthorDate: Thu Jun 27 14:35:23 2019 -0700

    Revert "GEODE-2113 implement SSL over NIO"
    
    This reverts commit 588af8522a48b1fcaf045bb9ce028e4e8422dcba.
---
 .../apache/geode/test/dunit/internal/ProcessManager.java | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/geode-dunit/src/main/java/org/apache/geode/test/dunit/internal/ProcessManager.java b/geode-dunit/src/main/java/org/apache/geode/test/dunit/internal/ProcessManager.java
index c0e1237..d43b644 100755
--- a/geode-dunit/src/main/java/org/apache/geode/test/dunit/internal/ProcessManager.java
+++ b/geode-dunit/src/main/java/org/apache/geode/test/dunit/internal/ProcessManager.java
@@ -159,10 +159,22 @@ class ProcessManager implements ChildVMLauncher {
 
   private void linkStreams(final String version, final int vmNum, final ProcessHolder holder,
       final InputStream in, final PrintStream out) {
-    final String vmName = "[" + VM.getVMName(version, vmNum) + "] ";
+    final String vmName = "[" + VM.getVMName(version, vmNum);
     Thread ioTransport = new Thread() {
       @Override
       public void run() {
+        StringBuffer sb = new StringBuffer();
+        // use low four bytes for backward compatibility
+        long time = System.currentTimeMillis() & 0xffffffffL;
+        for (int i = 0; i < 4; i++) {
+          String hex = Integer.toHexString((int) (time & 0xff));
+          if (hex.length() < 2) {
+            sb.append('0');
+          }
+          sb.append(hex);
+          time = time / 0x100;
+        }
+        String uniqueString = vmName + ", 0x" + sb.toString() + "] ";
         BufferedReader reader = new BufferedReader(new InputStreamReader(in));
         try {
           String line = reader.readLine();
@@ -170,7 +182,7 @@ class ProcessManager implements ChildVMLauncher {
             if (line.length() == 0) {
               out.println();
             } else {
-              out.print(vmName);
+              out.print(uniqueString);
               out.println(line);
             }
             line = reader.readLine();